00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 599 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3264 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.089 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/centos7-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.089 The recommended git tool is: git 00:00:00.089 using credential 00000000-0000-0000-0000-000000000002 00:00:00.091 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/centos7-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.116 Fetching changes from the remote Git repository 00:00:00.118 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.143 Using shallow fetch with depth 1 00:00:00.143 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.143 > git --version # timeout=10 00:00:00.167 > git --version # 'git version 2.39.2' 00:00:00.167 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.186 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.186 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.589 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.599 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.611 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:05.611 > git config core.sparsecheckout # timeout=10 00:00:05.621 > git read-tree -mu HEAD # timeout=10 00:00:05.637 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:05.657 Commit message: "inventory: add WCP3 to free inventory" 00:00:05.657 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:05.798 [Pipeline] Start of Pipeline 00:00:05.809 [Pipeline] library 00:00:05.810 Loading library shm_lib@master 00:00:05.810 Library shm_lib@master is cached. Copying from home. 00:00:05.824 [Pipeline] node 00:00:05.832 Running on VM-host-SM4 in /var/jenkins/workspace/centos7-vg-autotest 00:00:05.834 [Pipeline] { 00:00:05.843 [Pipeline] catchError 00:00:05.844 [Pipeline] { 00:00:05.853 [Pipeline] wrap 00:00:05.860 [Pipeline] { 00:00:05.866 [Pipeline] stage 00:00:05.868 [Pipeline] { (Prologue) 00:00:05.885 [Pipeline] echo 00:00:05.886 Node: VM-host-SM4 00:00:05.890 [Pipeline] cleanWs 00:00:05.900 [WS-CLEANUP] Deleting project workspace... 00:00:05.900 [WS-CLEANUP] Deferred wipeout is used... 
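[Note: the checkout traced above reduces to the following shell sequence. The URL, refspec, and revision are copied from the log; the standalone framing is a sketch, not what the Jenkins git plugin literally runs:]

    # Shallow-fetch the tip of master from the Gerrit-hosted build-pool repo,
    # then pin the working tree to the exact revision Jenkins resolved.
    git init jbp && cd jbp
    git fetch --tags --force --progress --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d   # FETCH_HEAD at the time of this run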
00:00:05.906 [WS-CLEANUP] done 00:00:06.094 [Pipeline] setCustomBuildProperty 00:00:06.221 [Pipeline] httpRequest 00:00:06.244 [Pipeline] echo 00:00:06.245 Sorcerer 10.211.164.101 is alive 00:00:06.251 [Pipeline] httpRequest 00:00:06.254 HttpMethod: GET 00:00:06.255 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.255 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.261 Response Code: HTTP/1.1 200 OK 00:00:06.261 Success: Status code 200 is in the accepted range: 200,404 00:00:06.261 Saving response body to /var/jenkins/workspace/centos7-vg-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:08.133 [Pipeline] sh 00:00:08.417 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:08.435 [Pipeline] httpRequest 00:00:08.455 [Pipeline] echo 00:00:08.456 Sorcerer 10.211.164.101 is alive 00:00:08.463 [Pipeline] httpRequest 00:00:08.468 HttpMethod: GET 00:00:08.468 URL: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:08.468 Sending request to url: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:08.474 Response Code: HTTP/1.1 200 OK 00:00:08.475 Success: Status code 200 is in the accepted range: 200,404 00:00:08.475 Saving response body to /var/jenkins/workspace/centos7-vg-autotest/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:01:08.651 [Pipeline] sh 00:01:08.934 + tar --no-same-owner -xf spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:01:11.485 [Pipeline] sh 00:01:11.774 + git -C spdk log --oneline -n5 00:01:11.774 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:01:11.774 507e9ba07 nvme: add lock_depth for ctrlr_lock 00:01:11.774 62fda7b5f nvme: check pthread_mutex_destroy() return value 00:01:11.774 e03c164a1 nvme: add nvme_ctrlr_lock 00:01:11.774 d61f89a86 nvme/cuse: Add ctrlr_lock for cuse register and unregister 00:01:11.800 [Pipeline] withCredentials 00:01:11.812 > git --version # timeout=10 00:01:11.825 > git --version # 'git version 2.39.2' 00:01:11.987 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:11.989 [Pipeline] { 00:01:12.000 [Pipeline] retry 00:01:12.002 [Pipeline] { 00:01:12.024 [Pipeline] sh 00:01:12.309 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:14.229 [Pipeline] } 00:01:14.251 [Pipeline] // retry 00:01:14.255 [Pipeline] } 00:01:14.274 [Pipeline] // withCredentials 00:01:14.284 [Pipeline] httpRequest 00:01:14.304 [Pipeline] echo 00:01:14.305 Sorcerer 10.211.164.101 is alive 00:01:14.313 [Pipeline] httpRequest 00:01:14.318 HttpMethod: GET 00:01:14.318 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:14.319 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:14.328 Response Code: HTTP/1.1 200 OK 00:01:14.329 Success: Status code 200 is in the accepted range: 200,404 00:01:14.329 Saving response body to /var/jenkins/workspace/centos7-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:24.983 [Pipeline] sh 00:01:25.267 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:26.658 [Pipeline] sh 00:01:26.940 + git -C dpdk log --oneline -n5 00:01:26.940 caf0f5d395 version: 22.11.4 00:01:26.940 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:26.940 dc9c799c7d vhost: fix missing spinlock unlock 
00:01:26.940 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:26.940 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:26.960 [Pipeline] writeFile 00:01:26.977 [Pipeline] sh 00:01:27.260 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:27.271 [Pipeline] sh 00:01:27.552 + cat autorun-spdk.conf 00:01:27.552 SPDK_TEST_UNITTEST=1 00:01:27.552 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:27.552 SPDK_TEST_BLOCKDEV=1 00:01:27.552 SPDK_TEST_DAOS=1 00:01:27.552 SPDK_RUN_ASAN=1 00:01:27.552 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:27.552 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:27.552 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:27.559 RUN_NIGHTLY=1 00:01:27.560 [Pipeline] } 00:01:27.573 [Pipeline] // stage 00:01:27.585 [Pipeline] stage 00:01:27.586 [Pipeline] { (Run VM) 00:01:27.596 [Pipeline] sh 00:01:27.877 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:27.877 + echo 'Start stage prepare_nvme.sh' 00:01:27.877 Start stage prepare_nvme.sh 00:01:27.877 + [[ -n 9 ]] 00:01:27.877 + disk_prefix=ex9 00:01:27.877 + [[ -n /var/jenkins/workspace/centos7-vg-autotest ]] 00:01:27.877 + [[ -e /var/jenkins/workspace/centos7-vg-autotest/autorun-spdk.conf ]] 00:01:27.877 + source /var/jenkins/workspace/centos7-vg-autotest/autorun-spdk.conf 00:01:27.877 ++ SPDK_TEST_UNITTEST=1 00:01:27.877 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:27.877 ++ SPDK_TEST_BLOCKDEV=1 00:01:27.877 ++ SPDK_TEST_DAOS=1 00:01:27.877 ++ SPDK_RUN_ASAN=1 00:01:27.877 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:27.877 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:27.877 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:27.877 ++ RUN_NIGHTLY=1 00:01:27.877 + cd /var/jenkins/workspace/centos7-vg-autotest 00:01:27.877 + nvme_files=() 00:01:27.877 + declare -A nvme_files 00:01:27.877 + backend_dir=/var/lib/libvirt/images/backends 00:01:27.877 + nvme_files['nvme.img']=5G 00:01:27.877 + nvme_files['nvme-cmb.img']=5G 00:01:27.877 + nvme_files['nvme-multi0.img']=4G 00:01:27.877 + nvme_files['nvme-multi1.img']=4G 00:01:27.877 + nvme_files['nvme-multi2.img']=4G 00:01:27.877 + nvme_files['nvme-openstack.img']=8G 00:01:27.877 + nvme_files['nvme-zns.img']=5G 00:01:27.877 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:27.877 + (( SPDK_TEST_FTL == 1 )) 00:01:27.877 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:27.877 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:27.877 + for nvme in "${!nvme_files[@]}" 00:01:27.877 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi2.img -s 4G 00:01:27.877 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:27.877 + for nvme in "${!nvme_files[@]}" 00:01:27.877 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-cmb.img -s 5G 00:01:27.877 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:27.877 + for nvme in "${!nvme_files[@]}" 00:01:27.877 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-openstack.img -s 8G 00:01:27.877 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:27.877 + for nvme in "${!nvme_files[@]}" 00:01:27.877 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-zns.img -s 5G 00:01:27.877 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:27.877 + for nvme in "${!nvme_files[@]}" 00:01:27.877 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi1.img -s 4G 00:01:27.877 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:28.137 + for nvme in "${!nvme_files[@]}" 00:01:28.137 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi0.img -s 4G 00:01:28.137 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:28.137 + for nvme in "${!nvme_files[@]}" 00:01:28.137 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme.img -s 5G 00:01:28.137 Formatting '/var/lib/libvirt/images/backends/ex9-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:28.137 ++ sudo grep -rl ex9-nvme.img /etc/libvirt/qemu 00:01:28.137 + echo 'End stage prepare_nvme.sh' 00:01:28.137 End stage prepare_nvme.sh 00:01:28.149 [Pipeline] sh 00:01:28.432 + DISTRO=centos7 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:28.432 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex9-nvme.img -H -a -v -f centos7 00:01:28.432 00:01:28.432 DIR=/var/jenkins/workspace/centos7-vg-autotest/spdk/scripts/vagrant 00:01:28.432 SPDK_DIR=/var/jenkins/workspace/centos7-vg-autotest/spdk 00:01:28.432 VAGRANT_TARGET=/var/jenkins/workspace/centos7-vg-autotest 00:01:28.432 HELP=0 00:01:28.432 DRY_RUN=0 00:01:28.432 NVME_FILE=/var/lib/libvirt/images/backends/ex9-nvme.img, 00:01:28.432 NVME_DISKS_TYPE=nvme, 00:01:28.432 NVME_AUTO_CREATE=0 00:01:28.432 NVME_DISKS_NAMESPACES=, 00:01:28.432 NVME_CMB=, 00:01:28.432 NVME_PMR=, 00:01:28.432 NVME_ZNS=, 00:01:28.432 NVME_MS=, 00:01:28.432 NVME_FDP=, 00:01:28.432 SPDK_VAGRANT_DISTRO=centos7 00:01:28.432 SPDK_VAGRANT_VMCPU=10 00:01:28.432 SPDK_VAGRANT_VMRAM=12288 00:01:28.432 SPDK_VAGRANT_PROVIDER=libvirt 00:01:28.432 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:28.432 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:28.432 SPDK_OPENSTACK_NETWORK=0 00:01:28.432 
VAGRANT_PACKAGE_BOX=0 00:01:28.432 VAGRANTFILE=/var/jenkins/workspace/centos7-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:28.432 FORCE_DISTRO=true 00:01:28.432 VAGRANT_BOX_VERSION= 00:01:28.432 EXTRA_VAGRANTFILES= 00:01:28.432 NIC_MODEL=e1000 00:01:28.432 00:01:28.432 mkdir: created directory '/var/jenkins/workspace/centos7-vg-autotest/centos7-libvirt' 00:01:28.432 /var/jenkins/workspace/centos7-vg-autotest/centos7-libvirt /var/jenkins/workspace/centos7-vg-autotest 00:01:30.965 Bringing machine 'default' up with 'libvirt' provider... 00:01:31.532 ==> default: Creating image (snapshot of base box volume). 00:01:31.532 ==> default: Creating domain with the following settings... 00:01:31.532 ==> default: -- Name: centos7-7.8.2003-1711172311-2200_default_1720856857_904dfc7176c70649b0b9 00:01:31.532 ==> default: -- Domain type: kvm 00:01:31.532 ==> default: -- Cpus: 10 00:01:31.532 ==> default: -- Feature: acpi 00:01:31.532 ==> default: -- Feature: apic 00:01:31.532 ==> default: -- Feature: pae 00:01:31.532 ==> default: -- Memory: 12288M 00:01:31.532 ==> default: -- Memory Backing: hugepages: 00:01:31.532 ==> default: -- Management MAC: 00:01:31.532 ==> default: -- Loader: 00:01:31.532 ==> default: -- Nvram: 00:01:31.532 ==> default: -- Base box: spdk/centos7 00:01:31.532 ==> default: -- Storage pool: default 00:01:31.532 ==> default: -- Image: /var/lib/libvirt/images/centos7-7.8.2003-1711172311-2200_default_1720856857_904dfc7176c70649b0b9.img (20G) 00:01:31.532 ==> default: -- Volume Cache: default 00:01:31.532 ==> default: -- Kernel: 00:01:31.532 ==> default: -- Initrd: 00:01:31.533 ==> default: -- Graphics Type: vnc 00:01:31.533 ==> default: -- Graphics Port: -1 00:01:31.533 ==> default: -- Graphics IP: 127.0.0.1 00:01:31.533 ==> default: -- Graphics Password: Not defined 00:01:31.533 ==> default: -- Video Type: cirrus 00:01:31.533 ==> default: -- Video VRAM: 9216 00:01:31.533 ==> default: -- Sound Type: 00:01:31.533 ==> default: -- Keymap: en-us 00:01:31.533 ==> default: -- TPM Path: 00:01:31.533 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:31.533 ==> default: -- Command line args: 00:01:31.533 ==> default: -> value=-device, 00:01:31.533 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:31.533 ==> default: -> value=-drive, 00:01:31.533 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme.img,if=none,id=nvme-0-drive0, 00:01:31.533 ==> default: -> value=-device, 00:01:31.533 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:31.791 ==> default: Creating shared folders metadata... 00:01:31.791 ==> default: Starting domain. 00:01:33.692 ==> default: Waiting for domain to get an IP address... 00:01:45.899 ==> default: Waiting for SSH to become available... 00:01:47.802 ==> default: Configuring and enabling network interfaces... 00:01:51.149 default: SSH address: 192.168.121.29:22 00:01:51.149 default: SSH username: vagrant 00:01:51.149 default: SSH auth method: private key 00:01:52.083 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/centos7-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:02.060 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/centos7-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:06.250 ==> default: Mounting SSHFS shared folder... 
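[Note: each "Formatting '...', fmt=raw size=... preallocation=falloc" line in the prepare_nvme stage above is characteristic qemu-img output. Assuming create_nvme_img.sh wraps qemu-img — an inference from that output, not something the log states — the per-disk step is roughly:]

    # Create a raw, fallocate-preallocated backing file for one NVMe namespace.
    backend_dir=/var/lib/libvirt/images/backends
    sudo qemu-img create -f raw -o preallocation=falloc "$backend_dir/ex9-nvme.img" 5G
    # Prints: Formatting '/var/lib/libvirt/images/backends/ex9-nvme.img', fmt=raw size=5368709120 preallocation=falloc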
00:02:07.186 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/centos7-vg-autotest/centos7-libvirt/output => /home/vagrant/spdk_repo/output 00:02:07.186 ==> default: Checking Mount.. 00:02:07.752 ==> default: Folder Successfully Mounted! 00:02:07.752 ==> default: Running provisioner: file... 00:02:08.318 default: ~/.gitconfig => .gitconfig 00:02:08.575 00:02:08.575 SUCCESS! 00:02:08.575 00:02:08.575 cd to /var/jenkins/workspace/centos7-vg-autotest/centos7-libvirt and type "vagrant ssh" to use. 00:02:08.575 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:08.575 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/centos7-vg-autotest/centos7-libvirt" to destroy all trace of vm. 00:02:08.575 00:02:08.582 [Pipeline] } 00:02:08.594 [Pipeline] // stage 00:02:08.600 [Pipeline] dir 00:02:08.601 Running in /var/jenkins/workspace/centos7-vg-autotest/centos7-libvirt 00:02:08.602 [Pipeline] { 00:02:08.613 [Pipeline] catchError 00:02:08.614 [Pipeline] { 00:02:08.626 [Pipeline] sh 00:02:08.906 + vagrant ssh-config --host vagrant 00:02:08.906 + sed -ne /^Host/,$p 00:02:08.906 + tee ssh_conf 00:02:12.187 Host vagrant 00:02:12.187 HostName 192.168.121.29 00:02:12.187 User vagrant 00:02:12.187 Port 22 00:02:12.187 UserKnownHostsFile /dev/null 00:02:12.187 StrictHostKeyChecking no 00:02:12.187 PasswordAuthentication no 00:02:12.187 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-centos7/7.8.2003-1711172311-2200/libvirt/centos7 00:02:12.187 IdentitiesOnly yes 00:02:12.187 LogLevel FATAL 00:02:12.187 ForwardAgent yes 00:02:12.187 ForwardX11 yes 00:02:12.187 00:02:12.200 [Pipeline] withEnv 00:02:12.202 [Pipeline] { 00:02:12.217 [Pipeline] sh 00:02:12.551 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:12.551 source /etc/os-release 00:02:12.551 [[ -e /image.version ]] && img=$(< /image.version) 00:02:12.551 # Minimal, systemd-like check. 00:02:12.551 if [[ -e /.dockerenv ]]; then 00:02:12.551 # Clear garbage from the node's name: 00:02:12.551 # agt-er_autotest_547-896 -> autotest_547-896 00:02:12.551 # $HOSTNAME is the actual container id 00:02:12.551 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:12.551 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:12.551 # We can assume this is a mount from a host where container is running, 00:02:12.551 # so fetch its hostname to easily identify the target swarm worker. 
00:02:12.551 container="$(< /etc/hostname) ($agent)" 00:02:12.551 else 00:02:12.551 # Fallback 00:02:12.551 container=$agent 00:02:12.551 fi 00:02:12.551 fi 00:02:12.551 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:12.551 00:02:12.561 [Pipeline] } 00:02:12.580 [Pipeline] // withEnv 00:02:12.587 [Pipeline] setCustomBuildProperty 00:02:12.601 [Pipeline] stage 00:02:12.603 [Pipeline] { (Tests) 00:02:12.620 [Pipeline] sh 00:02:12.894 + scp -F ssh_conf -r /var/jenkins/workspace/centos7-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:12.906 [Pipeline] sh 00:02:13.184 + scp -F ssh_conf -r /var/jenkins/workspace/centos7-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:13.196 [Pipeline] timeout 00:02:13.196 Timeout set to expire in 1 hr 30 min 00:02:13.197 [Pipeline] { 00:02:13.210 [Pipeline] sh 00:02:13.492 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:13.772 HEAD is now at 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:02:13.785 [Pipeline] sh 00:02:14.063 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:14.063 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:02:14.078 [Pipeline] sh 00:02:14.359 + scp -F ssh_conf -r /var/jenkins/workspace/centos7-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:14.376 [Pipeline] sh 00:02:14.660 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=centos7-vg-autotest ./autoruner.sh spdk_repo 00:02:14.660 ++ readlink -f spdk_repo 00:02:14.660 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:14.660 + [[ -n /home/vagrant/spdk_repo ]] 00:02:14.660 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:14.660 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:14.660 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:14.660 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:14.660 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:14.660 + [[ centos7-vg-autotest == pkgdep-* ]] 00:02:14.660 + cd /home/vagrant/spdk_repo 00:02:14.660 + source /etc/os-release 00:02:14.660 ++ NAME='CentOS Linux' 00:02:14.660 ++ VERSION='7 (Core)' 00:02:14.660 ++ ID=centos 00:02:14.660 ++ ID_LIKE='rhel fedora' 00:02:14.660 ++ VERSION_ID=7 00:02:14.660 ++ PRETTY_NAME='CentOS Linux 7 (Core)' 00:02:14.660 ++ ANSI_COLOR='0;31' 00:02:14.660 ++ CPE_NAME=cpe:/o:centos:centos:7 00:02:14.660 ++ HOME_URL=https://www.centos.org/ 00:02:14.660 ++ BUG_REPORT_URL=https://bugs.centos.org/ 00:02:14.660 ++ CENTOS_MANTISBT_PROJECT=CentOS-7 00:02:14.660 ++ CENTOS_MANTISBT_PROJECT_VERSION=7 00:02:14.660 ++ REDHAT_SUPPORT_PRODUCT=centos 00:02:14.660 ++ REDHAT_SUPPORT_PRODUCT_VERSION=7 00:02:14.660 + uname -a 00:02:14.660 Linux centos7-cloud-1711172311-2200 3.10.0-1160.114.2.el7.x86_64 #1 SMP Wed Mar 20 15:54:52 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:02:14.661 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:14.661 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:02:14.920 Hugepages 00:02:14.920 node hugesize free / total 00:02:14.920 node0 1048576kB 0 / 0 00:02:14.920 node0 2048kB 0 / 0 00:02:14.920 00:02:14.920 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:14.920 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:14.920 NVMe 0000:00:06.0 1b36 0010 0 nvme nvme0 nvme0n1 00:02:14.920 + rm -f /tmp/spdk-ld-path 00:02:14.920 + source autorun-spdk.conf 00:02:14.920 ++ SPDK_TEST_UNITTEST=1 00:02:14.920 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:14.920 ++ SPDK_TEST_BLOCKDEV=1 00:02:14.920 ++ SPDK_TEST_DAOS=1 00:02:14.920 ++ SPDK_RUN_ASAN=1 00:02:14.920 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:14.920 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:14.920 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:14.920 ++ RUN_NIGHTLY=1 00:02:14.920 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:14.920 + [[ -n '' ]] 00:02:14.920 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:14.920 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:02:14.920 + for M in /var/spdk/build-*-manifest.txt 00:02:14.920 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:14.920 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:14.920 + for M in /var/spdk/build-*-manifest.txt 00:02:14.920 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:14.920 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:14.920 ++ uname 00:02:14.920 + [[ Linux == \L\i\n\u\x ]] 00:02:14.920 + sudo dmesg -T 00:02:14.920 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:02:14.920 + sudo dmesg --clear 00:02:15.180 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:02:15.180 + dmesg_pid=2961 00:02:15.180 + [[ CentOS Linux == FreeBSD ]] 00:02:15.180 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:15.180 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:15.180 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:15.180 + sudo dmesg -Tw 00:02:15.180 + [[ -x /usr/src/fio-static/fio ]] 00:02:15.180 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:15.180 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:15.180 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:15.180 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:02:15.180 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:15.180 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:15.180 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:15.180 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:15.180 Test configuration: 00:02:15.180 SPDK_TEST_UNITTEST=1 00:02:15.180 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:15.180 SPDK_TEST_BLOCKDEV=1 00:02:15.180 SPDK_TEST_DAOS=1 00:02:15.180 SPDK_RUN_ASAN=1 00:02:15.180 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:15.180 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:15.180 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:15.180 RUN_NIGHTLY=1 07:48:20 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:15.180 07:48:20 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:15.180 07:48:20 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:15.180 07:48:20 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:15.180 07:48:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:02:15.180 07:48:20 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:02:15.180 07:48:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:02:15.180 07:48:20 -- paths/export.sh@5 -- $ export PATH 00:02:15.180 07:48:20 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:02:15.180 07:48:20 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:15.180 07:48:20 -- common/autobuild_common.sh@435 -- $ date +%s 00:02:15.180 07:48:20 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720856900.XXXXXX 00:02:15.180 07:48:20 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720856900.OyU81b 00:02:15.180 07:48:20 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:02:15.180 07:48:20 -- common/autobuild_common.sh@441 -- $ '[' -n v22.11.4 ']' 00:02:15.180 07:48:20 -- common/autobuild_common.sh@442 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:15.180 07:48:20 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:15.180 07:48:20 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:15.180 07:48:20 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o 
/home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:15.180 07:48:20 -- common/autobuild_common.sh@451 -- $ get_config_params 00:02:15.180 07:48:20 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:02:15.180 07:48:20 -- common/autotest_common.sh@10 -- $ set +x 00:02:15.180 07:48:20 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --enable-asan --enable-coverage --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-daos' 00:02:15.180 07:48:20 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:15.180 07:48:20 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:15.180 07:48:20 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:15.180 07:48:20 -- spdk/autobuild.sh@16 -- $ date -u 00:02:15.180 Sat Jul 13 07:48:20 UTC 2024 00:02:15.180 07:48:20 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:15.180 LTS-59-g4b94202c6 00:02:15.180 07:48:20 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:15.180 07:48:20 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:15.180 07:48:20 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:15.180 07:48:20 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:15.180 07:48:20 -- common/autotest_common.sh@10 -- $ set +x 00:02:15.180 ************************************ 00:02:15.180 START TEST asan 00:02:15.180 ************************************ 00:02:15.180 using asan 00:02:15.180 ************************************ 00:02:15.180 END TEST asan 00:02:15.180 ************************************ 00:02:15.180 07:48:20 -- common/autotest_common.sh@1104 -- $ echo 'using asan' 00:02:15.180 00:02:15.180 real 0m0.000s 00:02:15.180 user 0m0.000s 00:02:15.180 sys 0m0.000s 00:02:15.180 07:48:20 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:15.180 07:48:20 -- common/autotest_common.sh@10 -- $ set +x 00:02:15.180 07:48:20 -- spdk/autobuild.sh@23 -- $ '[' 0 -eq 1 ']' 00:02:15.180 07:48:20 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:15.180 07:48:20 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:15.180 07:48:20 -- common/autobuild_common.sh@427 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:15.180 07:48:20 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:02:15.180 07:48:20 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:15.180 07:48:20 -- common/autotest_common.sh@10 -- $ set +x 00:02:15.180 ************************************ 00:02:15.180 START TEST build_native_dpdk 00:02:15.180 ************************************ 00:02:15.180 07:48:20 -- common/autotest_common.sh@1104 -- $ _build_native_dpdk 00:02:15.180 07:48:20 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:15.180 07:48:20 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:15.180 07:48:20 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:15.180 07:48:20 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:15.180 07:48:20 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:15.180 07:48:20 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:15.180 07:48:20 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:15.180 07:48:20 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:15.180 07:48:20 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:15.180 07:48:20 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 
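[Note: the START TEST / END TEST banners and the timing block around "using asan" above come from SPDK's run_test wrapper. A minimal sketch of its visible behavior, reconstructed from this output — the real helper in autotest_common.sh does more bookkeeping:]

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                 # test body, e.g.: echo 'using asan'
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }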
00:02:15.180 07:48:20 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:15.180 07:48:20 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:15.180 07:48:20 -- common/autobuild_common.sh@68 -- $ compiler_version=10 00:02:15.180 07:48:20 -- common/autobuild_common.sh@69 -- $ compiler_version=10 00:02:15.180 07:48:20 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:15.180 07:48:20 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:15.180 07:48:20 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:15.180 07:48:20 -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:15.180 07:48:20 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:15.180 07:48:20 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:15.180 caf0f5d395 version: 22.11.4 00:02:15.180 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:15.180 dc9c799c7d vhost: fix missing spinlock unlock 00:02:15.180 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:15.180 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:15.180 07:48:20 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:15.180 07:48:20 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:15.180 07:48:20 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:15.180 07:48:20 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:15.180 07:48:20 -- common/autobuild_common.sh@89 -- $ [[ 10 -ge 5 ]] 00:02:15.180 07:48:20 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:15.180 07:48:20 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:15.180 07:48:20 -- common/autobuild_common.sh@93 -- $ [[ 10 -ge 10 ]] 00:02:15.180 07:48:20 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:15.180 07:48:20 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:15.180 07:48:20 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:15.180 07:48:20 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:15.180 07:48:20 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:15.180 07:48:20 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:15.180 07:48:20 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:15.180 07:48:20 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:15.180 07:48:20 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:15.180 07:48:20 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:15.180 07:48:20 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:15.180 07:48:20 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:15.180 07:48:20 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:15.181 07:48:20 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:15.181 07:48:20 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:15.181 07:48:20 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:15.181 07:48:20 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:15.181 07:48:20 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:15.181 07:48:20 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:15.181 07:48:20 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:15.181 07:48:20 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:15.181 07:48:20 -- 
scripts/common.sh@343 -- $ case "$op" in 00:02:15.181 07:48:20 -- scripts/common.sh@344 -- $ : 1 00:02:15.181 07:48:20 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:15.181 07:48:20 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:15.181 07:48:20 -- scripts/common.sh@364 -- $ decimal 22 00:02:15.181 07:48:20 -- scripts/common.sh@352 -- $ local d=22 00:02:15.440 07:48:20 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:15.440 07:48:20 -- scripts/common.sh@354 -- $ echo 22 00:02:15.440 07:48:20 -- scripts/common.sh@364 -- $ ver1[v]=22 00:02:15.440 07:48:20 -- scripts/common.sh@365 -- $ decimal 21 00:02:15.440 07:48:20 -- scripts/common.sh@352 -- $ local d=21 00:02:15.440 07:48:20 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:15.440 07:48:20 -- scripts/common.sh@354 -- $ echo 21 00:02:15.440 07:48:20 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:15.440 07:48:20 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:15.440 07:48:20 -- scripts/common.sh@366 -- $ return 1 00:02:15.440 07:48:20 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:15.440 patching file config/rte_config.h 00:02:15.440 Hunk #1 succeeded at 60 (offset 1 line). 00:02:15.440 07:48:20 -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:02:15.440 07:48:21 -- common/autobuild_common.sh@178 -- $ uname -s 00:02:15.440 07:48:21 -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:02:15.440 07:48:21 -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:15.440 07:48:21 -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:19.629 The Meson build system 00:02:19.629 Version: 0.61.5 00:02:19.629 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:19.629 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:19.629 Build type: native build 00:02:19.629 Program cat found: YES (/bin/cat) 00:02:19.629 Project name: DPDK 00:02:19.629 Project version: 22.11.4 00:02:19.629 C compiler for the host machine: gcc (gcc 10.2.1 "gcc (GCC) 10.2.1 20210130 (Red Hat 10.2.1-11)") 00:02:19.629 C linker for the host machine: gcc ld.bfd 2.35-5 00:02:19.629 Host machine cpu family: x86_64 00:02:19.629 Host machine cpu: x86_64 00:02:19.629 Message: ## Building in Developer Mode ## 00:02:19.629 Program pkg-config found: YES (/bin/pkg-config) 00:02:19.629 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:19.629 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:19.629 Program objdump found: YES (/bin/objdump) 00:02:19.629 Program python3 found: YES (/usr/bin/python3) 00:02:19.629 Program cat found: YES (/bin/cat) 00:02:19.629 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
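[Note: the cmp_versions walk traced above — split each version string on ".-:", then compare field by field — decides whether 21.11-era compatibility handling is needed. Condensed into standalone form (reconstructed from the xtrace; variable names match the trace):]

    IFS='.-:' read -ra ver1 <<< "22.11.4"   # DPDK under test
    IFS='.-:' read -ra ver2 <<< "21.11.0"   # threshold
    # The first differing field decides: 22 > 21, so "22.11.4 < 21.11.0"
    # is false and cmp_versions returns 1, exactly as traced above.
    (( ver1[0] > ver2[0] )) && echo "newer than 21.11.0"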
00:02:19.629 Checking for size of "void *" : 8 00:02:19.629 Checking for size of "void *" : 8 00:02:19.629 Library m found: YES 00:02:19.629 Library numa found: YES 00:02:19.629 Has header "numaif.h" : YES 00:02:19.629 Library fdt found: NO 00:02:19.629 Library execinfo found: NO 00:02:19.629 Has header "execinfo.h" : YES 00:02:19.629 Found pkg-config: /bin/pkg-config (0.27.1) 00:02:19.629 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:19.629 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:19.629 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:19.629 Run-time dependency openssl found: YES 1.0.2k 00:02:19.629 Run-time dependency libpcap found: NO (tried pkgconfig) 00:02:19.629 Library pcap found: NO 00:02:19.629 Compiler for C supports arguments -Wcast-qual: YES 00:02:19.629 Compiler for C supports arguments -Wdeprecated: YES 00:02:19.629 Compiler for C supports arguments -Wformat: YES 00:02:19.629 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:19.629 Compiler for C supports arguments -Wformat-security: NO 00:02:19.629 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:19.629 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:19.629 Compiler for C supports arguments -Wnested-externs: YES 00:02:19.629 Compiler for C supports arguments -Wold-style-definition: YES 00:02:19.629 Compiler for C supports arguments -Wpointer-arith: YES 00:02:19.629 Compiler for C supports arguments -Wsign-compare: YES 00:02:19.629 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:19.629 Compiler for C supports arguments -Wundef: YES 00:02:19.629 Compiler for C supports arguments -Wwrite-strings: YES 00:02:19.629 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:19.629 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:19.629 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:19.629 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:19.629 Compiler for C supports arguments -mavx512f: YES 00:02:19.629 Checking if "AVX512 checking" compiles: YES 00:02:19.629 Fetching value of define "__SSE4_2__" : 1 00:02:19.629 Fetching value of define "__AES__" : 1 00:02:19.629 Fetching value of define "__AVX__" : 1 00:02:19.629 Fetching value of define "__AVX2__" : 1 00:02:19.629 Fetching value of define "__AVX512BW__" : 1 00:02:19.629 Fetching value of define "__AVX512CD__" : 1 00:02:19.629 Fetching value of define "__AVX512DQ__" : 1 00:02:19.629 Fetching value of define "__AVX512F__" : 1 00:02:19.629 Fetching value of define "__AVX512VL__" : 1 00:02:19.629 Fetching value of define "__PCLMUL__" : 1 00:02:19.629 Fetching value of define "__RDRND__" : 1 00:02:19.629 Fetching value of define "__RDSEED__" : 1 00:02:19.629 Fetching value of define "__VPCLMULQDQ__" : 00:02:19.629 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:19.629 Message: lib/kvargs: Defining dependency "kvargs" 00:02:19.629 Message: lib/telemetry: Defining dependency "telemetry" 00:02:19.629 Checking for function "getentropy" : NO 00:02:19.629 Message: lib/eal: Defining dependency "eal" 00:02:19.629 Message: lib/ring: Defining dependency "ring" 00:02:19.629 Message: lib/rcu: Defining dependency "rcu" 00:02:19.629 Message: lib/mempool: Defining dependency "mempool" 00:02:19.629 Message: lib/mbuf: Defining dependency "mbuf" 00:02:19.629 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:19.629 Fetching value of define "__AVX512F__" : 1 (cached) 
00:02:19.629 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:19.629 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:19.629 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:19.629 Fetching value of define "__VPCLMULQDQ__" : (cached) 00:02:19.629 Compiler for C supports arguments -mpclmul: YES 00:02:19.629 Compiler for C supports arguments -maes: YES 00:02:21.019 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:21.019 Compiler for C supports arguments -mavx512bw: YES 00:02:21.019 Compiler for C supports arguments -mavx512dq: YES 00:02:21.019 Compiler for C supports arguments -mavx512vl: YES 00:02:21.019 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:21.019 Compiler for C supports arguments -mavx2: YES 00:02:21.019 Compiler for C supports arguments -mavx: YES 00:02:21.019 Message: lib/net: Defining dependency "net" 00:02:21.019 Message: lib/meter: Defining dependency "meter" 00:02:21.019 Message: lib/ethdev: Defining dependency "ethdev" 00:02:21.019 Message: lib/pci: Defining dependency "pci" 00:02:21.019 Message: lib/cmdline: Defining dependency "cmdline" 00:02:21.019 Message: lib/metrics: Defining dependency "metrics" 00:02:21.019 Message: lib/hash: Defining dependency "hash" 00:02:21.019 Message: lib/timer: Defining dependency "timer" 00:02:21.019 Fetching value of define "__AVX2__" : 1 (cached) 00:02:21.019 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:21.019 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:21.019 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:21.019 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:21.019 Message: lib/acl: Defining dependency "acl" 00:02:21.019 Message: lib/bbdev: Defining dependency "bbdev" 00:02:21.019 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:21.019 Run-time dependency libelf found: YES 0.176 00:02:21.019 lib/bpf/meson.build:43: WARNING: libpcap is missing, rte_bpf_convert API will be disabled 00:02:21.019 Message: lib/bpf: Defining dependency "bpf" 00:02:21.019 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:21.019 Message: lib/compressdev: Defining dependency "compressdev" 00:02:21.019 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:21.019 Message: lib/distributor: Defining dependency "distributor" 00:02:21.019 Message: lib/efd: Defining dependency "efd" 00:02:21.019 Message: lib/eventdev: Defining dependency "eventdev" 00:02:21.019 Message: lib/gpudev: Defining dependency "gpudev" 00:02:21.019 Message: lib/gro: Defining dependency "gro" 00:02:21.019 Message: lib/gso: Defining dependency "gso" 00:02:21.019 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:21.019 Message: lib/jobstats: Defining dependency "jobstats" 00:02:21.019 Message: lib/latencystats: Defining dependency "latencystats" 00:02:21.019 Message: lib/lpm: Defining dependency "lpm" 00:02:21.019 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:21.019 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:21.019 Fetching value of define "__AVX512IFMA__" : 00:02:21.019 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:21.019 Message: lib/member: Defining dependency "member" 00:02:21.019 Message: lib/pcapng: Defining dependency "pcapng" 00:02:21.019 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:21.019 Message: lib/power: Defining dependency "power" 00:02:21.019 Message: lib/rawdev: Defining dependency "rawdev" 00:02:21.019 Message: lib/regexdev: Defining dependency "regexdev" 
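[Note: each "Compiler for C supports arguments -X: YES/NO" line above is meson probing whether the compiler accepts a flag. In shell terms each probe is roughly equivalent to the following — a sketch of the idea, not meson's actual implementation:]

    # Try to compile a trivial translation unit with the candidate flag;
    # -Werror turns "unknown option" warnings into a failed probe.
    echo 'int main(void) { return 0; }' > conftest.c
    if gcc -Werror -mavx512bw -c conftest.c -o /dev/null 2>/dev/null; then
        echo "supports -mavx512bw: YES"
    else
        echo "supports -mavx512bw: NO"
    fi

[Likewise, each "Fetching value of define __X__" line corresponds to asking the preprocessor, e.g. gcc -dM -E - </dev/null | grep __AVX512F__.]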
00:02:21.019 Message: lib/dmadev: Defining dependency "dmadev" 00:02:21.019 Message: lib/rib: Defining dependency "rib" 00:02:21.019 Message: lib/reorder: Defining dependency "reorder" 00:02:21.019 Message: lib/sched: Defining dependency "sched" 00:02:21.019 Message: lib/security: Defining dependency "security" 00:02:21.019 Message: lib/stack: Defining dependency "stack" 00:02:21.019 Has header "linux/userfaultfd.h" : YES 00:02:21.019 Message: lib/vhost: Defining dependency "vhost" 00:02:21.019 Message: lib/ipsec: Defining dependency "ipsec" 00:02:21.019 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:21.019 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:21.019 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:21.019 Message: lib/fib: Defining dependency "fib" 00:02:21.019 Message: lib/port: Defining dependency "port" 00:02:21.019 Message: lib/pdump: Defining dependency "pdump" 00:02:21.019 Message: lib/table: Defining dependency "table" 00:02:21.019 Message: lib/pipeline: Defining dependency "pipeline" 00:02:21.019 Message: lib/graph: Defining dependency "graph" 00:02:21.019 Message: lib/node: Defining dependency "node" 00:02:21.019 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:21.019 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:21.019 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:21.019 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:21.019 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:21.019 Compiler for C supports arguments -Wno-unused-value: YES 00:02:21.019 Compiler for C supports arguments -Wno-format: YES 00:02:21.019 Compiler for C supports arguments -Wno-format-security: YES 00:02:21.019 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:21.019 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:21.953 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:21.953 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:21.953 Fetching value of define "__AVX2__" : 1 (cached) 00:02:21.953 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:21.953 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:21.953 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:21.953 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:21.953 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:21.953 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:21.953 Program doxygen found: YES (/bin/doxygen) 00:02:21.953 Configuring doxy-api.conf using configuration 00:02:21.953 Program sphinx-build found: NO 00:02:21.953 Configuring rte_build_config.h using configuration 00:02:21.953 Message: 00:02:21.953 ================= 00:02:21.953 Applications Enabled 00:02:21.953 ================= 00:02:21.953 00:02:21.953 apps: 00:02:21.953 pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, test-eventdev, 00:02:21.953 test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, test-security-perf, 00:02:21.953 00:02:21.953 00:02:21.953 Message: 00:02:21.953 ================= 00:02:21.953 Libraries Enabled 00:02:21.953 ================= 00:02:21.953 00:02:21.953 libs: 00:02:21.953 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:21.953 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:21.953 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 
00:02:21.953 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:21.953 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:21.954 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:21.954 table, pipeline, graph, node, 00:02:21.954 00:02:21.954 Message: 00:02:21.954 =============== 00:02:21.954 Drivers Enabled 00:02:21.954 =============== 00:02:21.954 00:02:21.954 common: 00:02:21.954 00:02:21.954 bus: 00:02:21.954 pci, vdev, 00:02:21.954 mempool: 00:02:21.954 ring, 00:02:21.954 dma: 00:02:21.954 00:02:21.954 net: 00:02:21.954 i40e, 00:02:21.954 raw: 00:02:21.954 00:02:21.954 crypto: 00:02:21.954 00:02:21.954 compress: 00:02:21.954 00:02:21.954 regex: 00:02:21.954 00:02:21.954 vdpa: 00:02:21.954 00:02:21.954 event: 00:02:21.954 00:02:21.954 baseband: 00:02:21.954 00:02:21.954 gpu: 00:02:21.954 00:02:21.954 00:02:21.954 Message: 00:02:21.954 ================= 00:02:21.954 Content Skipped 00:02:21.954 ================= 00:02:21.954 00:02:21.954 apps: 00:02:21.954 dumpcap: missing dependency, "libpcap" 00:02:21.954 00:02:21.954 libs: 00:02:21.954 kni: explicitly disabled via build config (deprecated lib) 00:02:21.954 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:21.954 00:02:21.954 drivers: 00:02:21.954 common/cpt: not in enabled drivers build config 00:02:21.954 common/dpaax: not in enabled drivers build config 00:02:21.954 common/iavf: not in enabled drivers build config 00:02:21.954 common/idpf: not in enabled drivers build config 00:02:21.954 common/mvep: not in enabled drivers build config 00:02:21.954 common/octeontx: not in enabled drivers build config 00:02:21.954 bus/auxiliary: not in enabled drivers build config 00:02:21.954 bus/dpaa: not in enabled drivers build config 00:02:21.954 bus/fslmc: not in enabled drivers build config 00:02:21.954 bus/ifpga: not in enabled drivers build config 00:02:21.954 bus/vmbus: not in enabled drivers build config 00:02:21.954 common/cnxk: not in enabled drivers build config 00:02:21.954 common/mlx5: not in enabled drivers build config 00:02:21.954 common/qat: not in enabled drivers build config 00:02:21.954 common/sfc_efx: not in enabled drivers build config 00:02:21.954 mempool/bucket: not in enabled drivers build config 00:02:21.954 mempool/cnxk: not in enabled drivers build config 00:02:21.954 mempool/dpaa: not in enabled drivers build config 00:02:21.954 mempool/dpaa2: not in enabled drivers build config 00:02:21.954 mempool/octeontx: not in enabled drivers build config 00:02:21.954 mempool/stack: not in enabled drivers build config 00:02:21.954 dma/cnxk: not in enabled drivers build config 00:02:21.954 dma/dpaa: not in enabled drivers build config 00:02:21.954 dma/dpaa2: not in enabled drivers build config 00:02:21.954 dma/hisilicon: not in enabled drivers build config 00:02:21.954 dma/idxd: not in enabled drivers build config 00:02:21.954 dma/ioat: not in enabled drivers build config 00:02:21.954 dma/skeleton: not in enabled drivers build config 00:02:21.954 net/af_packet: not in enabled drivers build config 00:02:21.954 net/af_xdp: not in enabled drivers build config 00:02:21.954 net/ark: not in enabled drivers build config 00:02:21.954 net/atlantic: not in enabled drivers build config 00:02:21.954 net/avp: not in enabled drivers build config 00:02:21.954 net/axgbe: not in enabled drivers build config 00:02:21.954 net/bnx2x: not in enabled drivers build config 00:02:21.954 net/bnxt: not in enabled drivers build config 00:02:21.954 net/bonding: not in enabled drivers 
build config 00:02:21.954 net/cnxk: not in enabled drivers build config 00:02:21.954 net/cxgbe: not in enabled drivers build config 00:02:21.954 net/dpaa: not in enabled drivers build config 00:02:21.954 net/dpaa2: not in enabled drivers build config 00:02:21.954 net/e1000: not in enabled drivers build config 00:02:21.954 net/ena: not in enabled drivers build config 00:02:21.954 net/enetc: not in enabled drivers build config 00:02:21.954 net/enetfec: not in enabled drivers build config 00:02:21.954 net/enic: not in enabled drivers build config 00:02:21.954 net/failsafe: not in enabled drivers build config 00:02:21.954 net/fm10k: not in enabled drivers build config 00:02:21.954 net/gve: not in enabled drivers build config 00:02:21.954 net/hinic: not in enabled drivers build config 00:02:21.954 net/hns3: not in enabled drivers build config 00:02:21.954 net/iavf: not in enabled drivers build config 00:02:21.954 net/ice: not in enabled drivers build config 00:02:21.954 net/idpf: not in enabled drivers build config 00:02:21.954 net/igc: not in enabled drivers build config 00:02:21.954 net/ionic: not in enabled drivers build config 00:02:21.954 net/ipn3ke: not in enabled drivers build config 00:02:21.954 net/ixgbe: not in enabled drivers build config 00:02:21.954 net/kni: not in enabled drivers build config 00:02:21.954 net/liquidio: not in enabled drivers build config 00:02:21.954 net/mana: not in enabled drivers build config 00:02:21.954 net/memif: not in enabled drivers build config 00:02:21.954 net/mlx4: not in enabled drivers build config 00:02:21.954 net/mlx5: not in enabled drivers build config 00:02:21.954 net/mvneta: not in enabled drivers build config 00:02:21.954 net/mvpp2: not in enabled drivers build config 00:02:21.954 net/netvsc: not in enabled drivers build config 00:02:21.954 net/nfb: not in enabled drivers build config 00:02:21.954 net/nfp: not in enabled drivers build config 00:02:21.954 net/ngbe: not in enabled drivers build config 00:02:21.954 net/null: not in enabled drivers build config 00:02:21.954 net/octeontx: not in enabled drivers build config 00:02:21.954 net/octeon_ep: not in enabled drivers build config 00:02:21.954 net/pcap: not in enabled drivers build config 00:02:21.954 net/pfe: not in enabled drivers build config 00:02:21.954 net/qede: not in enabled drivers build config 00:02:21.954 net/ring: not in enabled drivers build config 00:02:21.954 net/sfc: not in enabled drivers build config 00:02:21.954 net/softnic: not in enabled drivers build config 00:02:21.954 net/tap: not in enabled drivers build config 00:02:21.954 net/thunderx: not in enabled drivers build config 00:02:21.954 net/txgbe: not in enabled drivers build config 00:02:21.954 net/vdev_netvsc: not in enabled drivers build config 00:02:21.954 net/vhost: not in enabled drivers build config 00:02:21.954 net/virtio: not in enabled drivers build config 00:02:21.954 net/vmxnet3: not in enabled drivers build config 00:02:21.954 raw/cnxk_bphy: not in enabled drivers build config 00:02:21.954 raw/cnxk_gpio: not in enabled drivers build config 00:02:21.954 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:21.954 raw/ifpga: not in enabled drivers build config 00:02:21.954 raw/ntb: not in enabled drivers build config 00:02:21.954 raw/skeleton: not in enabled drivers build config 00:02:21.954 crypto/armv8: not in enabled drivers build config 00:02:21.954 crypto/bcmfs: not in enabled drivers build config 00:02:21.954 crypto/caam_jr: not in enabled drivers build config 00:02:21.954 crypto/ccp: not in 
enabled drivers build config 00:02:21.954 crypto/cnxk: not in enabled drivers build config 00:02:21.954 crypto/dpaa_sec: not in enabled drivers build config 00:02:21.954 crypto/dpaa2_sec: not in enabled drivers build config 00:02:21.954 crypto/ipsec_mb: not in enabled drivers build config 00:02:21.954 crypto/mlx5: not in enabled drivers build config 00:02:21.955 crypto/mvsam: not in enabled drivers build config 00:02:21.955 crypto/nitrox: not in enabled drivers build config 00:02:21.955 crypto/null: not in enabled drivers build config 00:02:21.955 crypto/octeontx: not in enabled drivers build config 00:02:21.955 crypto/openssl: not in enabled drivers build config 00:02:21.955 crypto/scheduler: not in enabled drivers build config 00:02:21.955 crypto/uadk: not in enabled drivers build config 00:02:21.955 crypto/virtio: not in enabled drivers build config 00:02:21.955 compress/isal: not in enabled drivers build config 00:02:21.955 compress/mlx5: not in enabled drivers build config 00:02:21.955 compress/octeontx: not in enabled drivers build config 00:02:21.955 compress/zlib: not in enabled drivers build config 00:02:21.955 regex/mlx5: not in enabled drivers build config 00:02:21.955 regex/cn9k: not in enabled drivers build config 00:02:21.955 vdpa/ifc: not in enabled drivers build config 00:02:21.955 vdpa/mlx5: not in enabled drivers build config 00:02:21.955 vdpa/sfc: not in enabled drivers build config 00:02:21.955 event/cnxk: not in enabled drivers build config 00:02:21.955 event/dlb2: not in enabled drivers build config 00:02:21.955 event/dpaa: not in enabled drivers build config 00:02:21.955 event/dpaa2: not in enabled drivers build config 00:02:21.955 event/dsw: not in enabled drivers build config 00:02:21.955 event/opdl: not in enabled drivers build config 00:02:21.955 event/skeleton: not in enabled drivers build config 00:02:21.955 event/sw: not in enabled drivers build config 00:02:21.955 event/octeontx: not in enabled drivers build config 00:02:21.955 baseband/acc: not in enabled drivers build config 00:02:21.955 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:21.955 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:21.955 baseband/la12xx: not in enabled drivers build config 00:02:21.955 baseband/null: not in enabled drivers build config 00:02:21.955 baseband/turbo_sw: not in enabled drivers build config 00:02:21.955 gpu/cuda: not in enabled drivers build config 00:02:21.955 00:02:21.955 00:02:22.890 Build targets in project: 310 00:02:22.890 00:02:22.890 DPDK 22.11.4 00:02:22.890 00:02:22.890 User defined options 00:02:22.890 libdir : lib 00:02:22.890 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:22.890 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:22.890 c_link_args : 00:02:22.890 enable_docs : false 00:02:22.890 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:22.890 enable_kmods : false 00:02:22.890 machine : native 00:02:22.890 tests : false 00:02:22.890 00:02:22.890 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:22.890 NOTICE: You are using Python 3.6 which is EOL. 
Starting with v0.62.0, Meson will require Python 3.7 or newer 00:02:23.149 07:48:28 -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:23.149 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:23.149 [1/737] Generating lib/rte_telemetry_def with a custom command 00:02:23.149 [2/737] Generating lib/rte_telemetry_mingw with a custom command 00:02:23.149 [3/737] Generating lib/rte_kvargs_mingw with a custom command 00:02:23.149 [4/737] Generating lib/rte_kvargs_def with a custom command 00:02:23.149 [5/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:23.149 [6/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:23.149 [7/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:23.411 [8/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:23.411 [9/737] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:23.411 [10/737] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:23.411 [11/737] Linking static target lib/librte_kvargs.a 00:02:23.411 [12/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:23.411 [13/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:23.411 [14/737] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:23.411 [15/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:23.411 [16/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:23.411 [17/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:23.716 [18/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:23.716 [19/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:23.716 [20/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:23.716 [21/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:23.716 [22/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:23.716 [23/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:23.716 [24/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:23.716 [25/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:23.716 [26/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:23.716 [27/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:23.716 [28/737] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:23.716 [29/737] Linking static target lib/librte_telemetry.a 00:02:23.716 [30/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:23.716 [31/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:23.716 [32/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:23.716 [33/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:23.975 [34/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:23.975 [35/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:23.975 [36/737] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:23.975 [37/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 
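Note: the "User defined options" summary above and the logged ninja invocation correspond to a conventional meson + ninja flow. A minimal sketch of equivalent standalone commands follows; the -D flags mirror that summary directly, but the exact meson command line used by the autobuild script is not shown in this log, so treat the sketch as a reconstruction, not the script's actual invocation:

  meson setup build-tmp \
      --prefix=/home/vagrant/spdk_repo/dpdk/build \
      --libdir=lib \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Denable_docs=false \
      -Denable_kmods=false \
      -Dtests=false \
      -Dmachine=native \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
  # then build with the same parallelism as logged above:
  ninja -C build-tmp -j10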
00:02:23.975 [38/737] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.975 [39/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:23.975 [40/737] Linking target lib/librte_kvargs.so.23.0 00:02:23.975 [41/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:23.975 [42/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:24.234 [43/737] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:24.234 [44/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:24.234 [45/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:24.234 [46/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:24.234 [47/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:24.234 [48/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:24.234 [49/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:24.234 [50/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:24.494 [51/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:24.494 [52/737] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:24.494 [53/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:24.494 [54/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:24.494 [55/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:24.494 [56/737] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:24.494 [57/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:24.494 [58/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:24.494 [59/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:24.494 [60/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:24.494 [61/737] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:24.494 [62/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:24.494 [63/737] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.494 [64/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:24.494 [65/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:24.494 [66/737] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:24.494 [67/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:24.494 [68/737] Linking target lib/librte_telemetry.so.23.0 00:02:24.753 [69/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:24.753 [70/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:24.753 [71/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:24.753 [72/737] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:24.753 [73/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:24.753 [74/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:24.753 [75/737] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:24.753 [76/737] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:24.753 [77/737] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:24.753 [78/737] 
Generating lib/rte_eal_def with a custom command 00:02:24.753 [79/737] Generating lib/rte_ring_def with a custom command 00:02:24.753 [80/737] Generating lib/rte_eal_mingw with a custom command 00:02:24.753 [81/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:24.753 [82/737] Generating lib/rte_ring_mingw with a custom command 00:02:24.753 [83/737] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:24.753 [84/737] Generating lib/rte_rcu_def with a custom command 00:02:24.753 [85/737] Generating lib/rte_rcu_mingw with a custom command 00:02:25.013 [86/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:25.013 [87/737] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:25.013 [88/737] Linking static target lib/librte_ring.a 00:02:25.013 [89/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:25.013 [90/737] Generating lib/rte_mempool_def with a custom command 00:02:25.013 [91/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:25.013 [92/737] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:25.013 [93/737] Generating lib/rte_mempool_mingw with a custom command 00:02:25.272 [94/737] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:25.272 [95/737] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:25.272 [96/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:25.272 [97/737] Generating lib/rte_mbuf_def with a custom command 00:02:25.272 [98/737] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:25.272 [99/737] Linking static target lib/librte_eal.a 00:02:25.272 [100/737] Generating lib/rte_mbuf_mingw with a custom command 00:02:25.272 [101/737] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:25.272 [102/737] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:25.531 [103/737] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:25.531 [104/737] Linking static target lib/librte_mempool.a 00:02:25.531 [105/737] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:25.531 [106/737] Linking static target lib/librte_rcu.a 00:02:25.531 [107/737] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:25.789 [108/737] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.789 [109/737] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:25.789 [110/737] Generating lib/rte_net_def with a custom command 00:02:25.789 [111/737] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:25.789 [112/737] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:25.789 [113/737] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:25.789 [114/737] Generating lib/rte_net_mingw with a custom command 00:02:25.789 [115/737] Generating lib/rte_meter_def with a custom command 00:02:25.789 [116/737] Generating lib/rte_meter_mingw with a custom command 00:02:25.789 [117/737] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:25.789 [118/737] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:25.789 [119/737] Linking static target lib/librte_meter.a 00:02:25.789 [120/737] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:26.049 [121/737] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:26.049 [122/737] Linking static target 
lib/librte_net.a 00:02:26.049 [123/737] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:26.307 [124/737] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:26.307 [125/737] Linking static target lib/librte_mbuf.a 00:02:26.307 [126/737] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.307 [127/737] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:26.307 [128/737] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:26.307 [129/737] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:26.566 [130/737] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.566 [131/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:26.566 [132/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:26.566 [133/737] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.825 [134/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:26.825 [135/737] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:26.825 [136/737] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.825 [137/737] Generating lib/rte_ethdev_def with a custom command 00:02:26.825 [138/737] Generating lib/rte_ethdev_mingw with a custom command 00:02:27.083 [139/737] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:27.083 [140/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:27.083 [141/737] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:27.083 [142/737] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:27.083 [143/737] Linking static target lib/librte_pci.a 00:02:27.083 [144/737] Generating lib/rte_pci_mingw with a custom command 00:02:27.083 [145/737] Generating lib/rte_pci_def with a custom command 00:02:27.083 [146/737] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:27.083 [147/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:27.341 [148/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:27.341 [149/737] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:27.341 [150/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:27.341 [151/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:27.341 [152/737] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.598 [153/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:27.598 [154/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:27.599 [155/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:27.599 [156/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:27.599 [157/737] Generating lib/rte_cmdline_def with a custom command 00:02:27.599 [158/737] Generating lib/rte_cmdline_mingw with a custom command 00:02:27.599 [159/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:27.599 [160/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:27.599 [161/737] Generating lib/rte_metrics_def with a custom command 00:02:27.599 [162/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:27.599 
[163/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:27.599 [164/737] Generating lib/rte_metrics_mingw with a custom command 00:02:27.599 [165/737] Generating lib/rte_hash_def with a custom command 00:02:27.599 [166/737] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:27.599 [167/737] Generating lib/rte_hash_mingw with a custom command 00:02:27.857 [168/737] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.857 [169/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:27.857 [170/737] Generating lib/rte_timer_def with a custom command 00:02:27.857 [171/737] Generating lib/rte_timer_mingw with a custom command 00:02:27.857 [172/737] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:27.857 [173/737] Linking static target lib/librte_cmdline.a 00:02:27.857 [174/737] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:27.857 [175/737] Linking static target lib/librte_metrics.a 00:02:28.116 [176/737] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:28.116 [177/737] Linking static target lib/librte_timer.a 00:02:28.374 [178/737] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:28.374 [179/737] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:28.374 [180/737] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:28.632 [181/737] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:28.632 [182/737] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:28.632 [183/737] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.632 [184/737] Generating lib/rte_acl_def with a custom command 00:02:28.632 [185/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:28.632 [186/737] Linking static target lib/librte_ethdev.a 00:02:28.890 [187/737] Generating lib/rte_acl_mingw with a custom command 00:02:28.890 [188/737] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:28.890 [189/737] Generating lib/rte_bbdev_def with a custom command 00:02:28.890 [190/737] Generating lib/rte_bbdev_mingw with a custom command 00:02:28.890 [191/737] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.890 [192/737] Generating lib/rte_bitratestats_def with a custom command 00:02:28.890 [193/737] Generating lib/rte_bitratestats_mingw with a custom command 00:02:29.149 [194/737] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.149 [195/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:29.149 [196/737] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:29.149 [197/737] Linking static target lib/librte_bitratestats.a 00:02:29.407 [198/737] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:29.407 [199/737] Linking static target lib/librte_bbdev.a 00:02:29.407 [200/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:29.666 [201/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:29.666 [202/737] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.666 [203/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:29.924 [204/737] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:29.924 [205/737] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:29.924 
[206/737] Linking static target lib/librte_hash.a 00:02:30.183 [207/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:30.442 [208/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:30.442 [209/737] Generating lib/rte_bpf_def with a custom command 00:02:30.442 [210/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:30.442 [211/737] Generating lib/rte_bpf_mingw with a custom command 00:02:30.442 [212/737] Generating lib/rte_cfgfile_def with a custom command 00:02:30.442 [213/737] Generating lib/rte_cfgfile_mingw with a custom command 00:02:30.701 [214/737] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.701 [215/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:30.701 [216/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:30.701 [217/737] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:30.701 [218/737] Linking static target lib/librte_cfgfile.a 00:02:30.701 [219/737] Linking static target lib/librte_bpf.a 00:02:30.960 [220/737] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:30.960 [221/737] Generating lib/rte_compressdev_def with a custom command 00:02:30.960 [222/737] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:30.960 [223/737] Generating lib/rte_compressdev_mingw with a custom command 00:02:30.960 [224/737] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:30.960 [225/737] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:30.960 [226/737] Linking static target lib/librte_compressdev.a 00:02:31.219 [227/737] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:31.219 [228/737] Linking static target lib/librte_acl.a 00:02:31.219 [229/737] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:31.219 [230/737] Generating lib/rte_cryptodev_def with a custom command 00:02:31.219 [231/737] Generating lib/rte_cryptodev_mingw with a custom command 00:02:31.478 [232/737] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.478 [233/737] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:31.478 [234/737] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.478 [235/737] Generating lib/rte_distributor_def with a custom command 00:02:31.478 [236/737] Generating lib/rte_distributor_mingw with a custom command 00:02:31.478 [237/737] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.478 [238/737] Generating lib/rte_efd_def with a custom command 00:02:31.737 [239/737] Generating lib/rte_efd_mingw with a custom command 00:02:31.737 [240/737] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:31.737 [241/737] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:31.995 [242/737] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:31.995 [243/737] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:31.995 [244/737] Linking static target lib/librte_distributor.a 00:02:31.995 [245/737] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.995 [246/737] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:32.254 [247/737] 
Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.254 [248/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:32.513 [249/737] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:32.513 [250/737] Linking static target lib/librte_efd.a 00:02:32.513 [251/737] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.772 [252/737] Generating lib/rte_eventdev_def with a custom command 00:02:32.772 [253/737] Generating lib/rte_eventdev_mingw with a custom command 00:02:32.772 [254/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:32.772 [255/737] Generating lib/rte_gpudev_def with a custom command 00:02:32.772 [256/737] Generating lib/rte_gpudev_mingw with a custom command 00:02:32.772 [257/737] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:32.772 [258/737] Linking static target lib/librte_cryptodev.a 00:02:33.056 [259/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:33.056 [260/737] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.315 [261/737] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:33.315 [262/737] Linking static target lib/librte_gpudev.a 00:02:33.315 [263/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:33.315 [264/737] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:33.315 [265/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:33.315 [266/737] Generating lib/rte_gro_def with a custom command 00:02:33.315 [267/737] Generating lib/rte_gro_mingw with a custom command 00:02:33.574 [268/737] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:33.574 [269/737] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:33.574 [270/737] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:33.831 [271/737] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:33.831 [272/737] Linking static target lib/librte_gro.a 00:02:33.831 [273/737] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:33.831 [274/737] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:34.090 [275/737] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:34.090 [276/737] Generating lib/rte_gso_def with a custom command 00:02:34.090 [277/737] Generating lib/rte_gso_mingw with a custom command 00:02:34.090 [278/737] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:34.090 [279/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:34.090 [280/737] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:34.090 [281/737] Linking static target lib/librte_eventdev.a 00:02:34.347 [282/737] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:34.348 [283/737] Linking static target lib/librte_gso.a 00:02:34.348 [284/737] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.348 [285/737] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.348 [286/737] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.348 [287/737] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:34.605 [288/737] Linking target lib/librte_eal.so.23.0 00:02:34.605 
[289/737] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.605 [290/737] Generating lib/rte_ip_frag_def with a custom command 00:02:34.605 [291/737] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:34.605 [292/737] Generating lib/rte_ip_frag_mingw with a custom command 00:02:34.605 [293/737] Generating lib/rte_jobstats_def with a custom command 00:02:34.605 [294/737] Generating lib/rte_jobstats_mingw with a custom command 00:02:34.605 [295/737] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:34.605 [296/737] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:34.605 [297/737] Generating lib/rte_latencystats_def with a custom command 00:02:34.605 [298/737] Generating lib/rte_latencystats_mingw with a custom command 00:02:34.863 [299/737] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:34.863 [300/737] Linking static target lib/librte_jobstats.a 00:02:34.863 [301/737] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.863 [302/737] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:34.863 [303/737] Generating lib/rte_lpm_def with a custom command 00:02:34.863 [304/737] Generating lib/rte_lpm_mingw with a custom command 00:02:34.863 [305/737] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:34.863 [306/737] Linking static target lib/librte_ip_frag.a 00:02:35.122 [307/737] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:35.122 [308/737] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:35.122 [309/737] Linking static target lib/librte_latencystats.a 00:02:35.122 [310/737] Linking target lib/librte_meter.so.23.0 00:02:35.122 [311/737] Linking target lib/librte_ring.so.23.0 00:02:35.122 [312/737] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:35.122 [313/737] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:35.122 [314/737] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:35.122 [315/737] Linking target lib/librte_timer.so.23.0 00:02:35.380 [316/737] Linking target lib/librte_pci.so.23.0 00:02:35.380 [317/737] Linking target lib/librte_acl.so.23.0 00:02:35.638 [318/737] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.638 [319/737] Linking target lib/librte_cfgfile.so.23.0 00:02:35.638 [320/737] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:35.638 [321/737] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:35.638 [322/737] Linking static target lib/librte_lpm.a 00:02:35.638 [323/737] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:35.638 [324/737] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.638 [325/737] Linking target lib/librte_jobstats.so.23.0 00:02:35.638 [326/737] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:35.638 [327/737] Linking target lib/librte_rcu.so.23.0 00:02:35.638 [328/737] Linking target lib/librte_mempool.so.23.0 00:02:35.638 [329/737] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.896 [330/737] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:35.896 [331/737] Generating symbol 
file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:35.896 [332/737] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:35.896 [333/737] Generating lib/rte_member_def with a custom command 00:02:35.896 [334/737] Generating lib/rte_member_mingw with a custom command 00:02:35.896 [335/737] Generating lib/rte_pcapng_def with a custom command 00:02:35.896 [336/737] Generating lib/rte_pcapng_mingw with a custom command 00:02:35.896 [337/737] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:36.155 [338/737] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:36.155 [339/737] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:36.155 [340/737] Linking target lib/librte_mbuf.so.23.0 00:02:36.155 [341/737] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:36.414 [342/737] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:36.414 [343/737] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.414 [344/737] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:36.414 [345/737] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.414 [346/737] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:36.414 [347/737] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:36.414 [348/737] Linking static target lib/librte_pcapng.a 00:02:36.414 [349/737] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:36.673 [350/737] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:36.673 [351/737] Generating lib/rte_power_def with a custom command 00:02:36.673 [352/737] Generating lib/rte_power_mingw with a custom command 00:02:36.673 [353/737] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:36.673 [354/737] Generating lib/rte_rawdev_def with a custom command 00:02:36.673 [355/737] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:36.673 [356/737] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:36.673 [357/737] Generating lib/rte_rawdev_mingw with a custom command 00:02:36.673 [358/737] Linking target lib/librte_bbdev.so.23.0 00:02:36.673 [359/737] Linking target lib/librte_net.so.23.0 00:02:36.673 [360/737] Linking target lib/librte_compressdev.so.23.0 00:02:36.673 [361/737] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:36.673 [362/737] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:36.932 [363/737] Linking target lib/librte_distributor.so.23.0 00:02:36.932 [364/737] Linking target lib/librte_gpudev.so.23.0 00:02:36.932 [365/737] Linking target lib/librte_cryptodev.so.23.0 00:02:36.932 [366/737] Generating lib/rte_regexdev_def with a custom command 00:02:36.932 [367/737] Generating lib/rte_regexdev_mingw with a custom command 00:02:36.932 [368/737] Generating lib/rte_dmadev_def with a custom command 00:02:36.932 [369/737] Generating lib/rte_dmadev_mingw with a custom command 00:02:36.932 [370/737] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:36.932 [371/737] Linking static target lib/librte_rawdev.a 00:02:36.932 [372/737] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:36.932 [373/737] Linking static target lib/librte_power.a 00:02:37.191 [374/737] Generating 
lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.191 [375/737] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:37.191 [376/737] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.191 [377/737] Linking static target lib/librte_regexdev.a 00:02:37.191 [378/737] Generating lib/rte_rib_def with a custom command 00:02:37.191 [379/737] Generating lib/rte_rib_mingw with a custom command 00:02:37.191 [380/737] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:37.191 [381/737] Linking static target lib/librte_dmadev.a 00:02:37.191 [382/737] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:37.191 [383/737] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:37.191 [384/737] Linking target lib/librte_ethdev.so.23.0 00:02:37.191 [385/737] Linking target lib/librte_cmdline.so.23.0 00:02:37.452 [386/737] Linking target lib/librte_hash.so.23.0 00:02:37.452 [387/737] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:37.452 [388/737] Linking static target lib/librte_member.a 00:02:37.452 [389/737] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:37.452 [390/737] Generating lib/rte_reorder_def with a custom command 00:02:37.452 [391/737] Generating lib/rte_reorder_mingw with a custom command 00:02:37.710 [392/737] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:37.710 [393/737] Linking static target lib/librte_reorder.a 00:02:37.710 [394/737] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:37.710 [395/737] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.710 [396/737] Linking target lib/librte_metrics.so.23.0 00:02:37.710 [397/737] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:37.710 [398/737] Linking target lib/librte_bpf.so.23.0 00:02:37.967 [399/737] Linking target lib/librte_efd.so.23.0 00:02:37.967 [400/737] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:37.967 [401/737] Linking target lib/librte_eventdev.so.23.0 00:02:37.967 [402/737] Linking target lib/librte_gro.so.23.0 00:02:37.967 [403/737] Linking target lib/librte_gso.so.23.0 00:02:37.967 [404/737] Linking target lib/librte_ip_frag.so.23.0 00:02:37.967 [405/737] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.967 [406/737] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.967 [407/737] Linking target lib/librte_lpm.so.23.0 00:02:37.967 [408/737] Linking target lib/librte_member.so.23.0 00:02:38.226 [409/737] Linking target lib/librte_pcapng.so.23.0 00:02:38.226 [410/737] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.226 [411/737] Linking target lib/librte_rawdev.so.23.0 00:02:38.226 [412/737] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.226 [413/737] Linking target lib/librte_regexdev.so.23.0 00:02:38.226 [414/737] Linking target lib/librte_power.so.23.0 00:02:38.226 [415/737] Linking static target lib/librte_rib.a 00:02:38.226 [416/737] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.226 [417/737] Linking target lib/librte_dmadev.so.23.0 00:02:38.226 [418/737] Generating symbol file 
lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:38.226 [419/737] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:38.226 [420/737] Linking target lib/librte_reorder.so.23.0 00:02:38.226 [421/737] Linking target lib/librte_bitratestats.so.23.0 00:02:38.226 [422/737] Linking target lib/librte_latencystats.so.23.0 00:02:38.484 [423/737] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:38.484 [424/737] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:38.484 [425/737] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:38.484 [426/737] Generating lib/rte_sched_def with a custom command 00:02:38.484 [427/737] Generating lib/rte_sched_mingw with a custom command 00:02:38.484 [428/737] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:38.484 [429/737] Generating lib/rte_security_def with a custom command 00:02:38.484 [430/737] Generating lib/rte_security_mingw with a custom command 00:02:38.484 [431/737] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:38.484 [432/737] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:38.484 [433/737] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:38.484 [434/737] Generating lib/rte_stack_def with a custom command 00:02:38.484 [435/737] Generating lib/rte_stack_mingw with a custom command 00:02:38.484 [436/737] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:38.484 [437/737] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:38.484 [438/737] Linking static target lib/librte_stack.a 00:02:38.743 [439/737] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:38.743 [440/737] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:38.743 [441/737] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:39.001 [442/737] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:39.001 [443/737] Linking static target lib/librte_security.a 00:02:39.001 [444/737] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:39.001 [445/737] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:39.001 [446/737] Generating lib/rte_vhost_def with a custom command 00:02:39.001 [447/737] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:39.001 [448/737] Linking static target lib/librte_sched.a 00:02:39.001 [449/737] Generating lib/rte_vhost_mingw with a custom command 00:02:39.001 [450/737] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.001 [451/737] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:39.260 [452/737] Linking target lib/librte_stack.so.23.0 00:02:39.260 [453/737] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.260 [454/737] Linking target lib/librte_rib.so.23.0 00:02:39.520 [455/737] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:39.790 [456/737] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:39.790 [457/737] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.790 [458/737] Linking target lib/librte_security.so.23.0 00:02:39.790 [459/737] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:39.790 [460/737] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:39.790 
[461/737] Generating lib/rte_ipsec_def with a custom command 00:02:39.790 [462/737] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.790 [463/737] Generating lib/rte_ipsec_mingw with a custom command 00:02:39.790 [464/737] Linking target lib/librte_sched.so.23.0 00:02:40.062 [465/737] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:40.062 [466/737] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:40.062 [467/737] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:40.321 [468/737] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:40.321 [469/737] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:40.321 [470/737] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:40.321 [471/737] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:40.321 [472/737] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:40.321 [473/737] Generating lib/rte_fib_mingw with a custom command 00:02:40.321 [474/737] Generating lib/rte_fib_def with a custom command 00:02:40.580 [475/737] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:40.580 [476/737] Linking static target lib/librte_ipsec.a 00:02:40.580 [477/737] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:40.580 [478/737] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:40.839 [479/737] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:40.839 [480/737] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:40.839 [481/737] Linking static target lib/librte_fib.a 00:02:40.839 [482/737] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:40.839 [483/737] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:41.098 [484/737] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:41.098 [485/737] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:41.098 [486/737] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:41.357 [487/737] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.357 [488/737] Linking target lib/librte_ipsec.so.23.0 00:02:41.357 [489/737] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:41.616 [490/737] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.616 [491/737] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:41.616 [492/737] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:41.616 [493/737] Linking target lib/librte_fib.so.23.0 00:02:41.616 [494/737] Generating lib/rte_port_mingw with a custom command 00:02:41.616 [495/737] Generating lib/rte_port_def with a custom command 00:02:41.616 [496/737] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:41.616 [497/737] Generating lib/rte_pdump_def with a custom command 00:02:41.616 [498/737] Generating lib/rte_pdump_mingw with a custom command 00:02:41.616 [499/737] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:41.875 [500/737] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:41.875 [501/737] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:41.875 [502/737] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:41.875 [503/737] Compiling C object 
lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:41.875 [504/737] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:42.134 [505/737] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:42.135 [506/737] Linking static target lib/librte_port.a 00:02:42.135 [507/737] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:42.135 [508/737] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:42.135 [509/737] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:42.394 [510/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:42.394 [511/737] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:42.394 [512/737] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:42.394 [513/737] Linking static target lib/librte_pdump.a 00:02:42.653 [514/737] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:42.912 [515/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:42.912 [516/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:42.912 [517/737] Generating lib/rte_table_def with a custom command 00:02:42.912 [518/737] Generating lib/rte_table_mingw with a custom command 00:02:42.912 [519/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:42.912 [520/737] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:43.171 [521/737] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.171 [522/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:43.171 [523/737] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.171 [524/737] Linking target lib/librte_pdump.so.23.0 00:02:43.171 [525/737] Linking target lib/librte_port.so.23.0 00:02:43.171 [526/737] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:43.171 [527/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:43.171 [528/737] Linking static target lib/librte_table.a 00:02:43.453 [529/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:43.453 [530/737] Generating lib/rte_pipeline_def with a custom command 00:02:43.453 [531/737] Generating lib/rte_pipeline_mingw with a custom command 00:02:43.453 [532/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:43.712 [533/737] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:43.712 [534/737] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:43.971 [535/737] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:43.971 [536/737] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:43.971 [537/737] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:44.229 [538/737] Generating lib/rte_graph_def with a custom command 00:02:44.229 [539/737] Generating lib/rte_graph_mingw with a custom command 00:02:44.229 [540/737] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:44.229 [541/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:44.488 [542/737] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.488 [543/737] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:44.488 [544/737] Linking static target lib/librte_graph.a 00:02:44.488 
[545/737] Linking target lib/librte_table.so.23.0 00:02:44.488 [546/737] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:44.488 [547/737] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:44.746 [548/737] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:44.746 [549/737] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:44.746 [550/737] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:45.005 [551/737] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:45.005 [552/737] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:45.005 [553/737] Generating lib/rte_node_def with a custom command 00:02:45.005 [554/737] Generating lib/rte_node_mingw with a custom command 00:02:45.264 [555/737] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:45.264 [556/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:45.264 [557/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:45.264 [558/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:45.264 [559/737] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:45.264 [560/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:45.264 [561/737] Generating drivers/rte_bus_pci_def with a custom command 00:02:45.522 [562/737] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:45.522 [563/737] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:45.522 [564/737] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:45.522 [565/737] Linking static target lib/librte_node.a 00:02:45.523 [566/737] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:45.523 [567/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:45.523 [568/737] Generating drivers/rte_bus_vdev_def with a custom command 00:02:45.523 [569/737] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:45.523 [570/737] Generating drivers/rte_mempool_ring_def with a custom command 00:02:45.523 [571/737] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:45.523 [572/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:45.523 [573/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:45.523 [574/737] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:45.523 [575/737] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:45.523 [576/737] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:45.781 [577/737] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.781 [578/737] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:45.781 [579/737] Linking target lib/librte_graph.so.23.0 00:02:46.040 [580/737] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:46.040 [581/737] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:46.040 [582/737] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:46.040 [583/737] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:46.040 [584/737] Linking static target drivers/librte_bus_vdev.a 00:02:46.040 [585/737] Linking static target drivers/librte_bus_pci.a 00:02:46.040 [586/737] Generating 
lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.040 [587/737] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:46.299 [588/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:46.299 [589/737] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:46.299 [590/737] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:46.299 [591/737] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:46.299 [592/737] Linking target lib/librte_node.so.23.0 00:02:46.557 [593/737] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:46.557 [594/737] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:46.557 [595/737] Linking static target drivers/librte_mempool_ring.a 00:02:46.557 [596/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:46.557 [597/737] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.557 [598/737] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:46.557 [599/737] Linking target drivers/librte_mempool_ring.so.23.0 00:02:46.557 [600/737] Linking target drivers/librte_bus_vdev.so.23.0 00:02:46.557 [601/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:46.815 [602/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:46.815 [603/737] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.815 [604/737] Linking target drivers/librte_bus_pci.so.23.0 00:02:47.073 [605/737] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:47.073 [606/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:47.331 [607/737] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:47.331 [608/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:47.331 [609/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:47.331 [610/737] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:47.894 [611/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:47.894 [612/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:48.151 [613/737] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:48.408 [614/737] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:48.408 [615/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:48.408 [616/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:48.408 [617/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:48.665 [618/737] Generating drivers/rte_net_i40e_def with a custom command 00:02:48.665 [619/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:48.665 [620/737] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:48.921 [621/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:49.484 [622/737] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:49.484 [623/737] Compiling C object 
app/dpdk-proc-info.p/proc-info_main.c.o 00:02:49.484 [624/737] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:49.484 [625/737] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:49.741 [626/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:49.741 [627/737] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:49.741 [628/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:49.741 [629/737] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:49.998 [630/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:50.255 [631/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:50.255 [632/737] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:50.255 [633/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:50.513 [634/737] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:50.771 [635/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:50.771 [636/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:50.771 [637/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:51.029 [638/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:51.029 [639/737] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:51.029 [640/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:51.029 [641/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:51.029 [642/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:51.029 [643/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:51.286 [644/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:51.286 [645/737] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:51.286 [646/737] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:51.544 [647/737] Linking static target drivers/librte_net_i40e.a 00:02:51.544 [648/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:51.544 [649/737] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:51.544 [650/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:51.544 [651/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:51.802 [652/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:51.802 [653/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:51.802 [654/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:51.802 [655/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:51.802 [656/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:52.060 [657/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:52.060 [658/737] Compiling C 
object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:52.060 [659/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:52.318 [660/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:52.318 [661/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:52.576 [662/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:52.576 [663/737] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.576 [664/737] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:52.576 [665/737] Linking static target lib/librte_vhost.a 00:02:52.835 [666/737] Linking target drivers/librte_net_i40e.so.23.0 00:02:52.835 [667/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:52.835 [668/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:53.092 [669/737] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:53.092 [670/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:53.092 [671/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:53.351 [672/737] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:53.351 [673/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:53.351 [674/737] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:53.609 [675/737] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:53.609 [676/737] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:53.609 [677/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:53.609 [678/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:53.867 [679/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:53.867 [680/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:53.867 [681/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:53.867 [682/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:54.125 [683/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:54.125 [684/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:54.125 [685/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:54.125 [686/737] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:54.383 [687/737] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:54.383 [688/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:54.383 [689/737] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:54.383 [690/737] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.641 [691/737] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:54.641 [692/737] Linking target lib/librte_vhost.so.23.0 00:02:54.641 [693/737] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:54.899 [694/737] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:55.157 [695/737] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:55.157 [696/737] Compiling 
C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:55.157 [697/737] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:55.157 [698/737] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:55.415 [699/737] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:55.673 [700/737] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:55.673 [701/737] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:55.673 [702/737] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:55.673 [703/737] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:56.042 [704/737] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:56.042 [705/737] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:56.042 [706/737] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:56.314 [707/737] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:56.571 [708/737] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:56.572 [709/737] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:56.572 [710/737] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:56.572 [711/737] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:56.572 [712/737] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:56.829 [713/737] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:56.829 [714/737] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:56.829 [715/737] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:57.394 [716/737] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:57.394 [717/737] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:58.766 [718/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:58.766 [719/737] Linking static target lib/librte_pipeline.a 00:02:59.331 [720/737] Linking target app/dpdk-test-acl 00:02:59.331 [721/737] Linking target app/dpdk-proc-info 00:02:59.331 [722/737] Linking target app/dpdk-test-bbdev 00:02:59.331 [723/737] Linking target app/dpdk-test-fib 00:02:59.331 [724/737] Linking target app/dpdk-pdump 00:02:59.331 [725/737] Linking target app/dpdk-test-compress-perf 00:02:59.331 [726/737] Linking target app/dpdk-test-crypto-perf 00:02:59.331 [727/737] Linking target app/dpdk-test-eventdev 00:02:59.331 [728/737] Linking target app/dpdk-test-cmdline 00:02:59.896 [729/737] Linking target app/dpdk-test-flow-perf 00:02:59.896 [730/737] Linking target app/dpdk-test-gpudev 00:02:59.896 [731/737] Linking target app/dpdk-test-pipeline 00:02:59.896 [732/737] Linking target app/dpdk-test-regex 00:02:59.896 [733/737] Linking target app/dpdk-test-sad 00:02:59.896 [734/737] Linking target app/dpdk-test-security-perf 00:02:59.896 [735/737] Linking target app/dpdk-testpmd 00:03:02.429 [736/737] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.429 [737/737] Linking target lib/librte_pipeline.so.23.0 00:03:02.429 07:49:08 -- common/autobuild_common.sh@187 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:02.688 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:02.688 [0/1] Installing files. 
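Note: the install step above is invoked by the autobuild helper (common/autobuild_common.sh, line 187 in this run). As a minimal sketch of the equivalent manual sequence, assuming the source layout under /home/vagrant/spdk_repo/dpdk seen in the log: the build directory name build-tmp, the -j10 job count, and the install prefix build are taken from the log lines above and below, while the explicit meson setup invocation is a hypothetical reconstruction of how such a tree is typically configured, not the exact command the CI script ran.

    # configure an out-of-tree build, installing into ./build (prefix assumed from the paths below)
    meson setup build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build
    # compile with 10 parallel jobs, matching the -j10 in the logged command
    ninja -C build-tmp -j10
    # copy libraries, symlinks, and the examples tree shown below into the prefix
    ninja -C build-tmp -j10 install

The install phase that follows simply mirrors the meson install manifest: the examples sources are copied under share/dpdk/examples, and each library is installed as a versioned .so.23.0 with .so.23 and .so symlinks pointing at it.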
00:03:02.948 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:02.948 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:02.948 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:02.948 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:02.948 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:02.948 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:02.948 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:02.948 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:02.948 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:02.948 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:02.948 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:02.948 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:02.948 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:02.948 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:02.948 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:02.948 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:02.948 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:02.948 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:02.948 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:02.948 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:02.948 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:02.948 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:02.948 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:02.948 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:02.948 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:02.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:02.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:02.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:02.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:02.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:03.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:03.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:03.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:03.211 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:03.211 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:03.211 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:03.211 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:03.211 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:03.211 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:03.211 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:03.211 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:03.211 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:03.211 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:03.211 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:03.211 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:03.211 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:03.211 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:03.211 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:03.211 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:03.211 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:03.211 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:03.211 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:03.211 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:03.211 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:03.212 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:03.212 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:03.212 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:03.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:03.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:03.214 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:03.214 
Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.214 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.215 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:03.215 
Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:03.215 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:03.216 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:03.216 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:03.216 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.216 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.216 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:03.216 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:03.216 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.216 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.216 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:03.216 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:03.216 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.216 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.216 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:03.216 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:03.216 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.216 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.216 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:03.216 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:03.216 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.216 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.216 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:03.216 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:03.216 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.216 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.216 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:03.216 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:03.216 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.216 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.216 Installing symlink pointing to librte_mbuf.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:03.216 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:03.216 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.216 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.216 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:03.216 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:03.216 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.216 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.216 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:03.216 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:03.216 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.216 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:03.217 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:03.217 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:03.217 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:03.217 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:03.217 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:03.217 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:03.217 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:03.217 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:03.217 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:03.217 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:03.217 Installing symlink pointing to librte_timer.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:03.217 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:03.217 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:03.217 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:03.217 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:03.217 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:03.217 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:03.217 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:03.217 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:03.217 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:03.217 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:03.217 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:03.217 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:03.217 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:03.217 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:03.217 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:03.217 Installing symlink pointing to librte_distributor.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:03.217 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:03.217 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:03.217 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:03.217 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:03.217 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:03.217 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:03.217 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:03.217 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:03.217 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:03.217 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:03.217 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.217 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:03.217 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:03.218 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:03.218 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:03.218 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:03.218 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:03.218 Installing lib/librte_lpm.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:03.218 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:03.218 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:03.218 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:03.218 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:03.218 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:03.218 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:03.218 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:03.218 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:03.218 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:03.218 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:03.218 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:03.218 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:03.218 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:03.218 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:03.218 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:03.218 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 
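[Note on the pattern above: each library is installed three ways -- the static archive (librte_X.a), the versioned shared object (librte_X.so.23.0), and two symlinks: librte_X.so.23, the runtime soname, and librte_X.so, the link-time name. A minimal sketch of why the soname link matters; the program, file name, and paths-on-LD_LIBRARY_PATH setup are illustrative assumptions, not part of this build:]

    /* soname_demo.c -- hypothetical illustration, not part of this build.
     * Assumed build/run (lib path taken from the log above):
     *   cc soname_demo.c -o soname_demo -ldl
     *   LD_LIBRARY_PATH=/home/vagrant/spdk_repo/dpdk/build/lib ./soname_demo
     */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        /* Consumers request the soname "librte_kvargs.so.23"; the symlink
         * the installer created resolves it to librte_kvargs.so.23.0. */
        void *h = dlopen("librte_kvargs.so.23", RTLD_NOW);
        if (h == NULL) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }
        puts("resolved librte_kvargs.so.23 via the installed symlink");
        dlclose(h);
        return 0;
    }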
Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:03.218 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:03.218 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:03.218 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:03.218 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:03.218 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:03.218 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:03.218 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:03.218 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:03.218 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:03.218 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:03.218 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:03.218 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:03.218 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:03.218 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:03.218 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:03.218 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:03.218 Installing symlink pointing to 
librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:03.218 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:03.218 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:03.218 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:03.218 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:03.218 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:03.218 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:03.218 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.218 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:03.218 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:03.218 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.219 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:03.219 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:03.219 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:03.219 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.219 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:03.219 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:03.219 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:03.219 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.219 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:03.219 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:03.219 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:03.219 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:03.822 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:03.822 Installing symlink pointing to librte_net_i40e.so.23.0 
to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:03.822 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:03.822 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:03.822 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:03.822 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:03.822 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:03.822 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:03.822 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:03.822 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:03.822 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:03.822 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:03.822 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:03.822 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:03.822 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:03.822 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:03.822 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:03.822 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:03.822 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include/ 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to 
/home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/ 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/ 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/ 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/ 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/ 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/ 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/ 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/ 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/ 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include/ 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/ 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/ 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/ 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include/ 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include/ 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include/ 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include/ 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.822 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 
Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 
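[Note: with headers going into build/include (generic arch headers under include/generic, x86 variants at the top level) and the libraries already in build/lib, the installed tree is usable as a standalone SDK. A minimal consumer as a sketch -- the file name and compile line are assumptions; rte_eal.h and rte_version.h are among the headers installed in this log:]

    /* eal_version.c -- hypothetical consumer of the tree installed above.
     * Assumed compile line (paths taken from this log):
     *   cc eal_version.c -o eal_version \
     *      -I/home/vagrant/spdk_repo/dpdk/build/include \
     *      -L/home/vagrant/spdk_repo/dpdk/build/lib \
     *      -lrte_eal -lrte_kvargs -lrte_telemetry
     */
    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_version.h>

    int main(int argc, char **argv)
    {
        /* rte_eal_init() consumes EAL arguments, e.g. --no-huge for a
         * quick test without hugepage setup. */
        if (rte_eal_init(argc, argv) < 0) {
            fprintf(stderr, "EAL init failed\n");
            return 1;
        }
        printf("%s\n", rte_version()); /* e.g. "DPDK 22.11.4" */
        rte_eal_cleanup();
        return 0;
    }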
Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.823 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 
Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:03.824 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:04.406 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:04.406 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:04.406 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:04.406 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:04.406 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:04.406 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:04.406 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:04.406 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:04.406 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:04.406 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:04.406 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:04.406 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:04.406 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:04.406 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:04.406 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:04.406 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:04.406 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:04.406 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:04.406 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:04.406 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:04.406 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:04.406 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:04.406 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:04.406 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:04.406 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:04.406 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:04.406 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:04.406 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:04.406 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:04.406 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:04.406 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:04.406 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:04.406 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:04.406 NOTICE: You are using Python 3.6 which is EOL. Starting with v0.62.0, Meson will require Python 3.7 or newer 00:03:04.406 07:49:10 -- common/autobuild_common.sh@189 -- $ uname -s 00:03:04.406 07:49:10 -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:04.406 07:49:10 -- common/autobuild_common.sh@200 -- $ cat 00:03:04.406 ************************************ 00:03:04.406 END TEST build_native_dpdk 00:03:04.406 ************************************ 00:03:04.406 07:49:10 -- common/autobuild_common.sh@205 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:04.406 00:03:04.406 real 0m49.098s 00:03:04.406 user 4m52.998s 00:03:04.406 sys 0m59.706s 00:03:04.406 07:49:10 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:04.406 07:49:10 -- common/autotest_common.sh@10 -- $ set +x 00:03:04.406 07:49:10 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:04.406 07:49:10 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:04.406 07:49:10 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:04.406 07:49:10 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:04.406 07:49:10 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:03:04.406 07:49:10 -- spdk/autobuild.sh@58 -- $ unittest_build 00:03:04.406 07:49:10 -- common/autobuild_common.sh@411 -- $ run_test unittest_build _unittest_build 00:03:04.406 07:49:10 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:03:04.406 07:49:10 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:03:04.406 07:49:10 -- common/autotest_common.sh@10 -- $ set +x 00:03:04.407 ************************************ 00:03:04.407 START TEST unittest_build 00:03:04.407 ************************************ 00:03:04.407 07:49:10 -- common/autotest_common.sh@1104 -- $ _unittest_build 00:03:04.407 07:49:10 -- common/autobuild_common.sh@402 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --enable-asan --enable-coverage --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-daos --without-shared 00:03:04.407 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 
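The configure invocation above wires the freshly installed DPDK (prefix /home/vagrant/spdk_repo/dpdk/build) into the SPDK build. Reconstructed as a standalone sketch, the equivalent manual flow is roughly the following; the meson options are an assumption inferred from the build-tmp directory and install prefix visible in the log, while the configure flags and make invocation are taken verbatim from it:

# Build and install DPDK into the prefix SPDK will consume (meson options assumed)
cd /home/vagrant/spdk_repo/dpdk
meson setup --prefix="$PWD/build" build-tmp
ninja -C build-tmp install

# Configure and build SPDK against that DPDK (flags as logged)
cd /home/vagrant/spdk_repo/spdk
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --enable-asan --enable-coverage \
    --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-daos --without-shared
make -j10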
00:03:04.407 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:04.407 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:04.666 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:04.926 RDMA_OPTION_ID_ACK_TIMEOUT is not supported 00:03:04.926 Using 'verbs' RDMA provider 00:03:05.494 WARNING: ISA-L & DPDK crypto cannot be used as nasm ver must be 2.14 or newer. 00:03:05.494 Without ISA-L, there is no software support for crypto or compression, 00:03:05.494 so these features will be disabled. 00:03:05.753 Creating mk/config.mk...done. 00:03:05.753 Creating mk/cc.flags.mk...done. 00:03:05.753 Type 'make' to build. 00:03:05.753 07:49:11 -- common/autobuild_common.sh@403 -- $ make -j10 00:03:06.012 make[1]: Nothing to be done for 'all'. 00:03:06.271 CC lib/ut_mock/mock.o 00:03:06.271 CC lib/log/log.o 00:03:06.271 CC lib/ut/ut.o 00:03:06.271 CC lib/log/log_flags.o 00:03:06.271 CC lib/log/log_deprecated.o 00:03:06.271 LIB libspdk_ut_mock.a 00:03:06.530 LIB libspdk_ut.a 00:03:06.530 LIB libspdk_log.a 00:03:06.530 CC lib/ioat/ioat.o 00:03:06.530 CC lib/dma/dma.o 00:03:06.530 CXX lib/trace_parser/trace.o 00:03:06.530 CC lib/util/base64.o 00:03:06.530 CC lib/util/bit_array.o 00:03:06.530 CC lib/util/cpuset.o 00:03:06.530 CC lib/util/crc16.o 00:03:06.530 CC lib/util/crc32.o 00:03:06.530 CC lib/util/crc32c.o 00:03:06.530 CC lib/vfio_user/host/vfio_user_pci.o 00:03:06.788 CC lib/util/crc32_ieee.o 00:03:06.788 LIB libspdk_dma.a 00:03:06.788 CC lib/util/crc64.o 00:03:06.788 CC lib/util/dif.o 00:03:06.788 CC lib/util/fd.o 00:03:06.788 CC lib/util/file.o 00:03:06.788 LIB libspdk_ioat.a 00:03:06.788 CC lib/vfio_user/host/vfio_user.o 00:03:06.788 CC lib/util/hexlify.o 00:03:06.788 CC lib/util/iov.o 00:03:07.048 CC lib/util/math.o 00:03:07.048 CC lib/util/pipe.o 00:03:07.048 CC lib/util/strerror_tls.o 00:03:07.048 CC lib/util/string.o 00:03:07.048 CC lib/util/uuid.o 00:03:07.048 CC lib/util/fd_group.o 00:03:07.048 LIB libspdk_vfio_user.a 00:03:07.048 CC lib/util/xor.o 00:03:07.048 CC lib/util/zipf.o 00:03:07.306 LIB libspdk_util.a 00:03:07.306 CC lib/rdma/common.o 00:03:07.306 CC lib/env_dpdk/env.o 00:03:07.306 CC lib/json/json_parse.o 00:03:07.306 CC lib/vmd/vmd.o 00:03:07.306 CC lib/rdma/rdma_verbs.o 00:03:07.306 CC lib/conf/conf.o 00:03:07.306 LIB libspdk_trace_parser.a 00:03:07.306 CC lib/idxd/idxd.o 00:03:07.306 CC lib/env_dpdk/memory.o 00:03:07.306 CC lib/json/json_util.o 00:03:07.306 CC lib/env_dpdk/pci.o 00:03:07.565 LIB libspdk_conf.a 00:03:07.565 CC lib/json/json_write.o 00:03:07.565 CC lib/env_dpdk/init.o 00:03:07.565 CC lib/vmd/led.o 00:03:07.565 CC lib/env_dpdk/threads.o 00:03:07.565 LIB libspdk_rdma.a 00:03:07.565 CC lib/env_dpdk/pci_ioat.o 00:03:07.565 CC lib/env_dpdk/pci_virtio.o 00:03:07.565 CC lib/env_dpdk/pci_vmd.o 00:03:07.565 CC lib/idxd/idxd_user.o 00:03:07.565 CC lib/env_dpdk/pci_idxd.o 00:03:07.823 CC lib/env_dpdk/pci_event.o 00:03:07.823 LIB libspdk_vmd.a 00:03:07.823 CC lib/env_dpdk/sigbus_handler.o 00:03:07.823 CC lib/env_dpdk/pci_dpdk.o 00:03:07.823 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:07.823 LIB libspdk_json.a 00:03:07.823 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:07.823 LIB libspdk_idxd.a 00:03:07.823 CC lib/jsonrpc/jsonrpc_server.o 00:03:07.823 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:07.823 CC lib/jsonrpc/jsonrpc_client.o 00:03:07.823 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:08.082 LIB libspdk_jsonrpc.a 00:03:08.082 CC lib/rpc/rpc.o 00:03:08.341 LIB libspdk_env_dpdk.a 00:03:08.341 LIB libspdk_rpc.a 00:03:08.600 CC lib/trace/trace.o 
00:03:08.600 CC lib/sock/sock.o 00:03:08.600 CC lib/sock/sock_rpc.o 00:03:08.600 CC lib/trace/trace_flags.o 00:03:08.600 CC lib/notify/notify.o 00:03:08.600 CC lib/trace/trace_rpc.o 00:03:08.600 CC lib/notify/notify_rpc.o 00:03:08.600 LIB libspdk_notify.a 00:03:08.600 LIB libspdk_trace.a 00:03:08.859 LIB libspdk_sock.a 00:03:08.859 CC lib/thread/thread.o 00:03:08.859 CC lib/thread/iobuf.o 00:03:09.118 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:09.118 CC lib/nvme/nvme_ctrlr.o 00:03:09.118 CC lib/nvme/nvme_fabric.o 00:03:09.118 CC lib/nvme/nvme_ns_cmd.o 00:03:09.118 CC lib/nvme/nvme_ns.o 00:03:09.118 CC lib/nvme/nvme_pcie_common.o 00:03:09.118 CC lib/nvme/nvme_pcie.o 00:03:09.118 CC lib/nvme/nvme_qpair.o 00:03:09.377 CC lib/nvme/nvme.o 00:03:09.636 CC lib/nvme/nvme_quirks.o 00:03:09.636 CC lib/nvme/nvme_transport.o 00:03:09.636 CC lib/nvme/nvme_discovery.o 00:03:09.636 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:09.636 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:09.894 LIB libspdk_thread.a 00:03:09.894 CC lib/nvme/nvme_tcp.o 00:03:09.894 CC lib/nvme/nvme_opal.o 00:03:09.894 CC lib/nvme/nvme_io_msg.o 00:03:09.894 CC lib/accel/accel.o 00:03:09.894 CC lib/nvme/nvme_poll_group.o 00:03:10.152 CC lib/nvme/nvme_zns.o 00:03:10.152 CC lib/accel/accel_rpc.o 00:03:10.152 CC lib/nvme/nvme_cuse.o 00:03:10.152 CC lib/nvme/nvme_vfio_user.o 00:03:10.410 CC lib/accel/accel_sw.o 00:03:10.410 CC lib/nvme/nvme_rdma.o 00:03:10.410 CC lib/blob/blobstore.o 00:03:10.410 CC lib/blob/request.o 00:03:10.410 CC lib/blob/zeroes.o 00:03:10.410 CC lib/init/json_config.o 00:03:10.410 CC lib/virtio/virtio.o 00:03:10.669 LIB libspdk_accel.a 00:03:10.669 CC lib/init/subsystem.o 00:03:10.669 CC lib/blob/blob_bs_dev.o 00:03:10.669 CC lib/init/subsystem_rpc.o 00:03:10.669 CC lib/init/rpc.o 00:03:10.669 CC lib/virtio/virtio_vhost_user.o 00:03:10.669 CC lib/virtio/virtio_vfio_user.o 00:03:10.669 CC lib/virtio/virtio_pci.o 00:03:10.669 CC lib/bdev/bdev.o 00:03:10.929 LIB libspdk_init.a 00:03:10.929 CC lib/bdev/bdev_rpc.o 00:03:10.929 CC lib/bdev/bdev_zone.o 00:03:10.929 CC lib/bdev/part.o 00:03:10.929 CC lib/bdev/scsi_nvme.o 00:03:10.929 CC lib/event/app.o 00:03:10.929 CC lib/event/reactor.o 00:03:10.929 CC lib/event/log_rpc.o 00:03:10.929 LIB libspdk_virtio.a 00:03:10.929 CC lib/event/app_rpc.o 00:03:11.188 CC lib/event/scheduler_static.o 00:03:11.188 LIB libspdk_nvme.a 00:03:11.188 LIB libspdk_event.a 00:03:12.126 LIB libspdk_blob.a 00:03:12.126 CC lib/blobfs/blobfs.o 00:03:12.126 CC lib/lvol/lvol.o 00:03:12.126 CC lib/blobfs/tree.o 00:03:12.126 LIB libspdk_bdev.a 00:03:12.126 CC lib/scsi/dev.o 00:03:12.126 CC lib/nbd/nbd.o 00:03:12.126 CC lib/nvmf/ctrlr.o 00:03:12.126 CC lib/nbd/nbd_rpc.o 00:03:12.126 CC lib/scsi/lun.o 00:03:12.126 CC lib/nvmf/ctrlr_discovery.o 00:03:12.126 CC lib/ftl/ftl_core.o 00:03:12.126 CC lib/scsi/port.o 00:03:12.386 CC lib/scsi/scsi.o 00:03:12.386 CC lib/scsi/scsi_bdev.o 00:03:12.386 CC lib/scsi/scsi_pr.o 00:03:12.386 CC lib/scsi/scsi_rpc.o 00:03:12.386 CC lib/nvmf/ctrlr_bdev.o 00:03:12.386 CC lib/nvmf/subsystem.o 00:03:12.386 CC lib/ftl/ftl_init.o 00:03:12.645 LIB libspdk_blobfs.a 00:03:12.645 LIB libspdk_lvol.a 00:03:12.645 LIB libspdk_nbd.a 00:03:12.645 CC lib/ftl/ftl_layout.o 00:03:12.645 CC lib/scsi/task.o 00:03:12.645 CC lib/ftl/ftl_debug.o 00:03:12.645 CC lib/ftl/ftl_io.o 00:03:12.645 CC lib/ftl/ftl_sb.o 00:03:12.645 CC lib/ftl/ftl_l2p.o 00:03:12.645 CC lib/nvmf/nvmf.o 00:03:12.645 LIB libspdk_scsi.a 00:03:12.645 CC lib/ftl/ftl_l2p_flat.o 00:03:12.904 CC lib/ftl/ftl_nv_cache.o 00:03:12.904 CC lib/iscsi/conn.o 
00:03:12.904 CC lib/nvmf/nvmf_rpc.o 00:03:12.904 CC lib/ftl/ftl_band.o 00:03:12.904 CC lib/nvmf/transport.o 00:03:12.904 CC lib/ftl/ftl_band_ops.o 00:03:12.904 CC lib/vhost/vhost.o 00:03:13.164 CC lib/nvmf/tcp.o 00:03:13.164 CC lib/nvmf/rdma.o 00:03:13.164 CC lib/ftl/ftl_writer.o 00:03:13.164 CC lib/ftl/ftl_rq.o 00:03:13.164 CC lib/vhost/vhost_rpc.o 00:03:13.164 CC lib/iscsi/init_grp.o 00:03:13.164 CC lib/iscsi/iscsi.o 00:03:13.164 CC lib/ftl/ftl_reloc.o 00:03:13.472 CC lib/iscsi/md5.o 00:03:13.473 CC lib/ftl/ftl_l2p_cache.o 00:03:13.473 CC lib/iscsi/param.o 00:03:13.473 CC lib/vhost/vhost_scsi.o 00:03:13.473 CC lib/iscsi/portal_grp.o 00:03:13.473 CC lib/iscsi/tgt_node.o 00:03:13.473 CC lib/ftl/ftl_p2l.o 00:03:13.473 CC lib/iscsi/iscsi_subsystem.o 00:03:13.473 CC lib/iscsi/iscsi_rpc.o 00:03:13.473 CC lib/vhost/vhost_blk.o 00:03:13.740 CC lib/ftl/mngt/ftl_mngt.o 00:03:13.740 CC lib/iscsi/task.o 00:03:13.740 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:13.740 CC lib/vhost/rte_vhost_user.o 00:03:13.740 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:13.740 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:13.740 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:13.999 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:13.999 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:13.999 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:13.999 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:13.999 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:13.999 LIB libspdk_nvmf.a 00:03:13.999 LIB libspdk_iscsi.a 00:03:13.999 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:13.999 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:13.999 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:13.999 CC lib/ftl/utils/ftl_conf.o 00:03:13.999 CC lib/ftl/utils/ftl_md.o 00:03:13.999 CC lib/ftl/utils/ftl_mempool.o 00:03:14.258 CC lib/ftl/utils/ftl_bitmap.o 00:03:14.259 CC lib/ftl/utils/ftl_property.o 00:03:14.259 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:14.259 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:14.259 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:14.259 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:14.259 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:14.259 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:14.259 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:14.259 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:14.259 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:14.517 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:14.517 CC lib/ftl/base/ftl_base_dev.o 00:03:14.517 LIB libspdk_vhost.a 00:03:14.517 CC lib/ftl/base/ftl_base_bdev.o 00:03:14.517 CC lib/ftl/ftl_trace.o 00:03:14.517 LIB libspdk_ftl.a 00:03:14.776 CC module/env_dpdk/env_dpdk_rpc.o 00:03:14.776 CC module/sock/posix/posix.o 00:03:14.776 CC module/scheduler/gscheduler/gscheduler.o 00:03:14.776 CC module/accel/dsa/accel_dsa.o 00:03:14.776 CC module/accel/error/accel_error.o 00:03:14.776 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:14.776 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:14.776 CC module/accel/ioat/accel_ioat.o 00:03:14.776 CC module/accel/iaa/accel_iaa.o 00:03:15.035 CC module/blob/bdev/blob_bdev.o 00:03:15.035 LIB libspdk_env_dpdk_rpc.a 00:03:15.035 LIB libspdk_scheduler_gscheduler.a 00:03:15.035 CC module/accel/error/accel_error_rpc.o 00:03:15.035 LIB libspdk_scheduler_dpdk_governor.a 00:03:15.035 CC module/accel/ioat/accel_ioat_rpc.o 00:03:15.035 CC module/accel/iaa/accel_iaa_rpc.o 00:03:15.035 LIB libspdk_scheduler_dynamic.a 00:03:15.035 CC module/accel/dsa/accel_dsa_rpc.o 00:03:15.035 LIB libspdk_accel_error.a 00:03:15.035 LIB libspdk_blob_bdev.a 00:03:15.035 LIB libspdk_accel_iaa.a 00:03:15.035 LIB libspdk_accel_ioat.a 00:03:15.035 LIB libspdk_accel_dsa.a 00:03:15.294 CC 
module/bdev/malloc/bdev_malloc.o 00:03:15.294 CC module/bdev/null/bdev_null.o 00:03:15.294 CC module/blobfs/bdev/blobfs_bdev.o 00:03:15.294 CC module/bdev/nvme/bdev_nvme.o 00:03:15.294 CC module/bdev/error/vbdev_error.o 00:03:15.294 CC module/bdev/gpt/gpt.o 00:03:15.294 CC module/bdev/lvol/vbdev_lvol.o 00:03:15.294 CC module/bdev/delay/vbdev_delay.o 00:03:15.294 CC module/bdev/passthru/vbdev_passthru.o 00:03:15.294 LIB libspdk_sock_posix.a 00:03:15.294 CC module/bdev/gpt/vbdev_gpt.o 00:03:15.294 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:15.294 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:15.294 CC module/bdev/null/bdev_null_rpc.o 00:03:15.294 CC module/bdev/error/vbdev_error_rpc.o 00:03:15.294 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:15.554 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:15.554 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:15.554 LIB libspdk_blobfs_bdev.a 00:03:15.554 LIB libspdk_bdev_gpt.a 00:03:15.554 LIB libspdk_bdev_error.a 00:03:15.554 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:15.554 LIB libspdk_bdev_null.a 00:03:15.554 LIB libspdk_bdev_passthru.a 00:03:15.554 LIB libspdk_bdev_malloc.a 00:03:15.554 LIB libspdk_bdev_delay.a 00:03:15.554 LIB libspdk_bdev_lvol.a 00:03:15.554 CC module/bdev/raid/bdev_raid.o 00:03:15.554 CC module/bdev/split/vbdev_split.o 00:03:15.554 CC module/bdev/raid/bdev_raid_rpc.o 00:03:15.554 CC module/bdev/aio/bdev_aio.o 00:03:15.554 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:15.554 CC module/bdev/ftl/bdev_ftl.o 00:03:15.554 CC module/bdev/daos/bdev_daos.o 00:03:15.813 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:15.813 CC module/bdev/split/vbdev_split_rpc.o 00:03:15.813 CC module/bdev/raid/bdev_raid_sb.o 00:03:15.813 CC module/bdev/aio/bdev_aio_rpc.o 00:03:15.813 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:15.813 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:15.813 LIB libspdk_bdev_split.a 00:03:15.813 CC module/bdev/nvme/nvme_rpc.o 00:03:15.813 CC module/bdev/daos/bdev_daos_rpc.o 00:03:15.813 CC module/bdev/nvme/bdev_mdns_client.o 00:03:15.813 CC module/bdev/raid/raid0.o 00:03:15.813 LIB libspdk_bdev_aio.a 00:03:15.813 LIB libspdk_bdev_zone_block.a 00:03:16.072 CC module/bdev/raid/raid1.o 00:03:16.072 CC module/bdev/nvme/vbdev_opal.o 00:03:16.072 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:16.072 LIB libspdk_bdev_ftl.a 00:03:16.072 LIB libspdk_bdev_daos.a 00:03:16.072 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:16.072 CC module/bdev/raid/concat.o 00:03:16.072 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:16.072 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:16.072 LIB libspdk_bdev_raid.a 00:03:16.072 LIB libspdk_bdev_virtio.a 00:03:16.331 LIB libspdk_bdev_nvme.a 00:03:16.590 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:16.590 CC module/event/subsystems/vmd/vmd.o 00:03:16.590 CC module/event/subsystems/sock/sock.o 00:03:16.590 CC module/event/subsystems/scheduler/scheduler.o 00:03:16.590 CC module/event/subsystems/iobuf/iobuf.o 00:03:16.590 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:16.590 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:16.590 LIB libspdk_event_vhost_blk.a 00:03:16.590 LIB libspdk_event_scheduler.a 00:03:16.590 LIB libspdk_event_sock.a 00:03:16.590 LIB libspdk_event_vmd.a 00:03:16.590 LIB libspdk_event_iobuf.a 00:03:16.849 CC module/event/subsystems/accel/accel.o 00:03:16.849 LIB libspdk_event_accel.a 00:03:17.108 CC module/event/subsystems/bdev/bdev.o 00:03:17.108 LIB libspdk_event_bdev.a 00:03:17.108 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:17.368 CC 
module/event/subsystems/nbd/nbd.o 00:03:17.368 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:17.368 CC module/event/subsystems/scsi/scsi.o 00:03:17.368 LIB libspdk_event_nbd.a 00:03:17.368 LIB libspdk_event_scsi.a 00:03:17.368 LIB libspdk_event_nvmf.a 00:03:17.626 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:17.626 CC module/event/subsystems/iscsi/iscsi.o 00:03:17.626 LIB libspdk_event_vhost_scsi.a 00:03:17.626 LIB libspdk_event_iscsi.a 00:03:17.883 CXX app/trace/trace.o 00:03:17.883 CC examples/nvme/hello_world/hello_world.o 00:03:17.883 CC examples/vmd/lsvmd/lsvmd.o 00:03:17.883 CC examples/sock/hello_world/hello_sock.o 00:03:17.883 CC examples/accel/perf/accel_perf.o 00:03:17.883 CC examples/ioat/perf/perf.o 00:03:17.883 CC examples/blob/hello_world/hello_blob.o 00:03:17.883 CC examples/bdev/hello_world/hello_bdev.o 00:03:17.883 CC examples/nvmf/nvmf/nvmf.o 00:03:17.883 CC test/accel/dif/dif.o 00:03:18.140 LINK lsvmd 00:03:18.140 LINK ioat_perf 00:03:18.140 LINK hello_world 00:03:18.140 LINK hello_sock 00:03:18.140 LINK hello_bdev 00:03:18.140 LINK hello_blob 00:03:18.140 LINK accel_perf 00:03:18.140 LINK spdk_trace 00:03:18.140 LINK nvmf 00:03:18.140 LINK dif 00:03:18.397 CC app/trace_record/trace_record.o 00:03:18.656 CC examples/ioat/verify/verify.o 00:03:18.656 LINK spdk_trace_record 00:03:18.656 LINK verify 00:03:18.656 CC examples/util/zipf/zipf.o 00:03:18.914 CC examples/vmd/led/led.o 00:03:18.914 CC examples/nvme/reconnect/reconnect.o 00:03:18.914 LINK zipf 00:03:18.914 CC examples/thread/thread/thread_ex.o 00:03:18.914 LINK led 00:03:19.172 LINK reconnect 00:03:19.172 LINK thread 00:03:19.172 CC app/nvmf_tgt/nvmf_main.o 00:03:19.172 CC examples/idxd/perf/perf.o 00:03:19.172 LINK nvmf_tgt 00:03:19.429 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:19.429 LINK idxd_perf 00:03:19.429 LINK interrupt_tgt 00:03:19.687 CC app/iscsi_tgt/iscsi_tgt.o 00:03:19.687 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:19.945 LINK iscsi_tgt 00:03:19.945 CC app/spdk_tgt/spdk_tgt.o 00:03:19.945 CC examples/bdev/bdevperf/bdevperf.o 00:03:20.203 LINK nvme_manage 00:03:20.203 LINK spdk_tgt 00:03:20.203 CC examples/blob/cli/blobcli.o 00:03:20.203 CC test/bdev/bdevio/bdevio.o 00:03:20.203 CC test/app/bdev_svc/bdev_svc.o 00:03:20.462 LINK bdev_svc 00:03:20.462 CC examples/nvme/arbitration/arbitration.o 00:03:20.462 LINK blobcli 00:03:20.462 LINK bdevperf 00:03:20.462 LINK bdevio 00:03:20.720 LINK arbitration 00:03:20.720 CC examples/nvme/hotplug/hotplug.o 00:03:20.979 CC test/blobfs/mkfs/mkfs.o 00:03:20.979 LINK hotplug 00:03:21.238 LINK mkfs 00:03:21.238 CC app/spdk_lspci/spdk_lspci.o 00:03:21.238 LINK spdk_lspci 00:03:21.497 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:21.497 LINK cmb_copy 00:03:21.755 CC examples/nvme/abort/abort.o 00:03:22.014 CC app/spdk_nvme_perf/perf.o 00:03:22.014 LINK abort 00:03:22.014 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:22.014 CC app/spdk_nvme_identify/identify.o 00:03:22.274 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:22.274 LINK nvme_fuzz 00:03:22.274 TEST_HEADER include/spdk/rpc.h 00:03:22.274 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:22.274 TEST_HEADER include/spdk/accel_module.h 00:03:22.274 TEST_HEADER include/spdk/bit_pool.h 00:03:22.274 TEST_HEADER include/spdk/ioat.h 00:03:22.274 TEST_HEADER include/spdk/blobfs.h 00:03:22.274 TEST_HEADER include/spdk/pipe.h 00:03:22.274 TEST_HEADER include/spdk/accel.h 00:03:22.274 TEST_HEADER include/spdk/version.h 00:03:22.274 TEST_HEADER include/spdk/trace_parser.h 00:03:22.274 TEST_HEADER 
include/spdk/opal_spec.h 00:03:22.274 TEST_HEADER include/spdk/uuid.h 00:03:22.274 TEST_HEADER include/spdk/bdev.h 00:03:22.274 TEST_HEADER include/spdk/hexlify.h 00:03:22.274 TEST_HEADER include/spdk/likely.h 00:03:22.274 TEST_HEADER include/spdk/vhost.h 00:03:22.274 TEST_HEADER include/spdk/memory.h 00:03:22.274 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:22.274 LINK spdk_nvme_perf 00:03:22.274 TEST_HEADER include/spdk/dma.h 00:03:22.274 TEST_HEADER include/spdk/nbd.h 00:03:22.274 LINK pmr_persistence 00:03:22.274 TEST_HEADER include/spdk/env.h 00:03:22.274 TEST_HEADER include/spdk/nvme_zns.h 00:03:22.274 TEST_HEADER include/spdk/env_dpdk.h 00:03:22.274 TEST_HEADER include/spdk/init.h 00:03:22.274 TEST_HEADER include/spdk/fd_group.h 00:03:22.274 TEST_HEADER include/spdk/bdev_module.h 00:03:22.274 TEST_HEADER include/spdk/opal.h 00:03:22.274 CC app/spdk_nvme_discover/discovery_aer.o 00:03:22.274 TEST_HEADER include/spdk/event.h 00:03:22.274 TEST_HEADER include/spdk/base64.h 00:03:22.274 TEST_HEADER include/spdk/nvmf.h 00:03:22.274 TEST_HEADER include/spdk/nvmf_spec.h 00:03:22.274 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:22.533 TEST_HEADER include/spdk/fd.h 00:03:22.533 TEST_HEADER include/spdk/barrier.h 00:03:22.533 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:22.533 TEST_HEADER include/spdk/zipf.h 00:03:22.533 TEST_HEADER include/spdk/scheduler.h 00:03:22.533 TEST_HEADER include/spdk/dif.h 00:03:22.533 CC app/spdk_top/spdk_top.o 00:03:22.533 TEST_HEADER include/spdk/scsi_spec.h 00:03:22.533 TEST_HEADER include/spdk/blob.h 00:03:22.533 TEST_HEADER include/spdk/cpuset.h 00:03:22.533 TEST_HEADER include/spdk/thread.h 00:03:22.533 TEST_HEADER include/spdk/tree.h 00:03:22.533 TEST_HEADER include/spdk/xor.h 00:03:22.533 TEST_HEADER include/spdk/assert.h 00:03:22.533 TEST_HEADER include/spdk/file.h 00:03:22.533 TEST_HEADER include/spdk/endian.h 00:03:22.533 TEST_HEADER include/spdk/notify.h 00:03:22.533 TEST_HEADER include/spdk/util.h 00:03:22.533 TEST_HEADER include/spdk/log.h 00:03:22.533 TEST_HEADER include/spdk/sock.h 00:03:22.533 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:22.533 TEST_HEADER include/spdk/config.h 00:03:22.533 TEST_HEADER include/spdk/histogram_data.h 00:03:22.533 TEST_HEADER include/spdk/nvme_intel.h 00:03:22.533 TEST_HEADER include/spdk/idxd_spec.h 00:03:22.533 TEST_HEADER include/spdk/crc16.h 00:03:22.533 TEST_HEADER include/spdk/bdev_zone.h 00:03:22.533 TEST_HEADER include/spdk/stdinc.h 00:03:22.533 TEST_HEADER include/spdk/vmd.h 00:03:22.533 TEST_HEADER include/spdk/scsi.h 00:03:22.533 TEST_HEADER include/spdk/jsonrpc.h 00:03:22.533 TEST_HEADER include/spdk/blob_bdev.h 00:03:22.533 TEST_HEADER include/spdk/crc32.h 00:03:22.533 TEST_HEADER include/spdk/nvmf_transport.h 00:03:22.533 TEST_HEADER include/spdk/idxd.h 00:03:22.533 TEST_HEADER include/spdk/crc64.h 00:03:22.533 TEST_HEADER include/spdk/nvme.h 00:03:22.534 TEST_HEADER include/spdk/iscsi_spec.h 00:03:22.534 TEST_HEADER include/spdk/queue.h 00:03:22.534 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:22.534 TEST_HEADER include/spdk/lvol.h 00:03:22.534 TEST_HEADER include/spdk/ftl.h 00:03:22.534 TEST_HEADER include/spdk/trace.h 00:03:22.534 TEST_HEADER include/spdk/ioat_spec.h 00:03:22.534 TEST_HEADER include/spdk/conf.h 00:03:22.534 TEST_HEADER include/spdk/ublk.h 00:03:22.534 TEST_HEADER include/spdk/bit_array.h 00:03:22.534 TEST_HEADER include/spdk/pci_ids.h 00:03:22.534 TEST_HEADER include/spdk/nvme_spec.h 00:03:22.534 TEST_HEADER include/spdk/string.h 00:03:22.534 TEST_HEADER include/spdk/gpt_spec.h 
00:03:22.534 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:22.534 TEST_HEADER include/spdk/json.h 00:03:22.534 TEST_HEADER include/spdk/reduce.h 00:03:22.534 TEST_HEADER include/spdk/mmio.h 00:03:22.534 LINK spdk_nvme_discover 00:03:22.534 CXX test/cpp_headers/rpc.o 00:03:22.534 LINK spdk_nvme_identify 00:03:22.534 CC test/dma/test_dma/test_dma.o 00:03:22.793 CXX test/cpp_headers/vfio_user_spec.o 00:03:22.793 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:22.793 CXX test/cpp_headers/accel_module.o 00:03:22.793 LINK spdk_top 00:03:22.793 LINK test_dma 00:03:23.052 CXX test/cpp_headers/bit_pool.o 00:03:23.052 CXX test/cpp_headers/ioat.o 00:03:23.052 CXX test/cpp_headers/blobfs.o 00:03:23.052 CXX test/cpp_headers/pipe.o 00:03:23.052 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:23.311 CXX test/cpp_headers/accel.o 00:03:23.311 CC app/vhost/vhost.o 00:03:23.311 CXX test/cpp_headers/version.o 00:03:23.311 CXX test/cpp_headers/trace_parser.o 00:03:23.311 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:23.311 CXX test/cpp_headers/opal_spec.o 00:03:23.311 CXX test/cpp_headers/uuid.o 00:03:23.311 CXX test/cpp_headers/bdev.o 00:03:23.311 CC app/spdk_dd/spdk_dd.o 00:03:23.311 LINK vhost 00:03:23.311 CXX test/cpp_headers/hexlify.o 00:03:23.311 CC app/fio/nvme/fio_plugin.o 00:03:23.571 CC app/fio/bdev/fio_plugin.o 00:03:23.571 CXX test/cpp_headers/likely.o 00:03:23.571 LINK vhost_fuzz 00:03:23.571 CXX test/cpp_headers/vhost.o 00:03:23.571 CC test/app/histogram_perf/histogram_perf.o 00:03:23.571 CXX test/cpp_headers/memory.o 00:03:23.571 LINK iscsi_fuzz 00:03:23.571 LINK spdk_dd 00:03:23.571 CXX test/cpp_headers/vfio_user_pci.o 00:03:23.571 LINK histogram_perf 00:03:23.829 CXX test/cpp_headers/dma.o 00:03:23.829 CXX test/cpp_headers/nbd.o 00:03:23.829 LINK spdk_nvme 00:03:23.829 LINK spdk_bdev 00:03:23.829 CXX test/cpp_headers/env.o 00:03:23.829 CXX test/cpp_headers/nvme_zns.o 00:03:23.829 CXX test/cpp_headers/env_dpdk.o 00:03:24.087 CXX test/cpp_headers/init.o 00:03:24.087 CXX test/cpp_headers/fd_group.o 00:03:24.087 CXX test/cpp_headers/bdev_module.o 00:03:24.087 CXX test/cpp_headers/opal.o 00:03:24.087 CC test/app/jsoncat/jsoncat.o 00:03:24.346 LINK jsoncat 00:03:24.346 CXX test/cpp_headers/event.o 00:03:24.346 CC test/env/mem_callbacks/mem_callbacks.o 00:03:24.346 CC test/env/vtophys/vtophys.o 00:03:24.346 CC test/app/stub/stub.o 00:03:24.346 LINK vtophys 00:03:24.346 CXX test/cpp_headers/base64.o 00:03:24.346 CXX test/cpp_headers/nvmf.o 00:03:24.346 LINK mem_callbacks 00:03:24.604 LINK stub 00:03:24.604 CXX test/cpp_headers/nvmf_spec.o 00:03:24.604 CXX test/cpp_headers/blobfs_bdev.o 00:03:24.862 CXX test/cpp_headers/fd.o 00:03:24.862 CXX test/cpp_headers/barrier.o 00:03:24.862 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:24.862 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:24.862 CC test/event/event_perf/event_perf.o 00:03:24.862 CXX test/cpp_headers/zipf.o 00:03:24.862 LINK env_dpdk_post_init 00:03:25.120 CC test/event/reactor/reactor.o 00:03:25.121 CC test/env/memory/memory_ut.o 00:03:25.121 LINK event_perf 00:03:25.121 CC test/event/reactor_perf/reactor_perf.o 00:03:25.121 LINK reactor 00:03:25.121 CXX test/cpp_headers/scheduler.o 00:03:25.121 LINK reactor_perf 00:03:25.121 CXX test/cpp_headers/dif.o 00:03:25.378 LINK memory_ut 00:03:25.378 CXX test/cpp_headers/scsi_spec.o 00:03:25.378 CC test/event/app_repeat/app_repeat.o 00:03:25.378 CC test/env/pci/pci_ut.o 00:03:25.378 CXX test/cpp_headers/blob.o 00:03:25.378 CXX test/cpp_headers/cpuset.o 00:03:25.378 CXX test/cpp_headers/thread.o 
00:03:25.636 LINK app_repeat 00:03:25.636 CXX test/cpp_headers/tree.o 00:03:25.636 CXX test/cpp_headers/xor.o 00:03:25.636 CXX test/cpp_headers/assert.o 00:03:25.636 CXX test/cpp_headers/file.o 00:03:25.636 CC test/event/scheduler/scheduler.o 00:03:25.636 CXX test/cpp_headers/endian.o 00:03:25.636 LINK pci_ut 00:03:25.636 CXX test/cpp_headers/notify.o 00:03:25.636 CXX test/cpp_headers/util.o 00:03:25.636 CXX test/cpp_headers/log.o 00:03:25.894 CXX test/cpp_headers/sock.o 00:03:25.894 CC test/rpc_client/rpc_client_test.o 00:03:25.894 CC test/nvme/aer/aer.o 00:03:25.894 LINK scheduler 00:03:25.894 CC test/lvol/esnap/esnap.o 00:03:25.894 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:25.894 CXX test/cpp_headers/config.o 00:03:25.894 CXX test/cpp_headers/histogram_data.o 00:03:25.894 CXX test/cpp_headers/nvme_intel.o 00:03:25.894 CXX test/cpp_headers/idxd_spec.o 00:03:25.894 LINK rpc_client_test 00:03:25.894 LINK aer 00:03:25.894 CXX test/cpp_headers/crc16.o 00:03:25.894 CXX test/cpp_headers/bdev_zone.o 00:03:26.153 CXX test/cpp_headers/stdinc.o 00:03:26.153 CXX test/cpp_headers/vmd.o 00:03:26.153 CXX test/cpp_headers/scsi.o 00:03:26.153 CXX test/cpp_headers/jsonrpc.o 00:03:26.153 CXX test/cpp_headers/blob_bdev.o 00:03:26.153 CC test/thread/poller_perf/poller_perf.o 00:03:26.153 CC test/nvme/reset/reset.o 00:03:26.153 CC test/nvme/sgl/sgl.o 00:03:26.425 LINK poller_perf 00:03:26.425 CXX test/cpp_headers/crc32.o 00:03:26.425 CXX test/cpp_headers/nvmf_transport.o 00:03:26.425 CC test/thread/lock/spdk_lock.o 00:03:26.425 LINK reset 00:03:26.425 CXX test/cpp_headers/idxd.o 00:03:26.425 LINK sgl 00:03:26.425 CXX test/cpp_headers/crc64.o 00:03:26.425 CXX test/cpp_headers/nvme.o 00:03:26.712 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:03:26.712 CXX test/cpp_headers/iscsi_spec.o 00:03:26.712 CXX test/cpp_headers/queue.o 00:03:26.712 LINK histogram_ut 00:03:26.712 CXX test/cpp_headers/nvmf_cmd.o 00:03:26.712 CXX test/cpp_headers/lvol.o 00:03:26.712 CC test/nvme/e2edp/nvme_dp.o 00:03:26.972 CXX test/cpp_headers/ftl.o 00:03:26.972 CXX test/cpp_headers/trace.o 00:03:26.972 CC test/nvme/overhead/overhead.o 00:03:26.972 CC test/unit/lib/accel/accel.c/accel_ut.o 00:03:26.972 CXX test/cpp_headers/ioat_spec.o 00:03:26.972 CXX test/cpp_headers/conf.o 00:03:26.972 LINK spdk_lock 00:03:26.972 LINK nvme_dp 00:03:27.229 LINK overhead 00:03:27.229 CXX test/cpp_headers/ublk.o 00:03:27.230 CC test/nvme/err_injection/err_injection.o 00:03:27.230 CC test/nvme/startup/startup.o 00:03:27.230 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:03:27.487 CXX test/cpp_headers/bit_array.o 00:03:27.487 LINK err_injection 00:03:27.487 LINK startup 00:03:27.487 CXX test/cpp_headers/pci_ids.o 00:03:27.746 CXX test/cpp_headers/nvme_spec.o 00:03:27.746 CXX test/cpp_headers/string.o 00:03:27.746 CXX test/cpp_headers/gpt_spec.o 00:03:27.746 CXX test/cpp_headers/nvme_ocssd.o 00:03:27.746 CXX test/cpp_headers/json.o 00:03:28.004 CXX test/cpp_headers/reduce.o 00:03:28.004 CC test/nvme/reserve/reserve.o 00:03:28.004 CXX test/cpp_headers/mmio.o 00:03:28.004 CC test/nvme/simple_copy/simple_copy.o 00:03:28.004 CC test/nvme/connect_stress/connect_stress.o 00:03:28.004 CC test/nvme/boot_partition/boot_partition.o 00:03:28.004 LINK reserve 00:03:28.004 CC test/nvme/compliance/nvme_compliance.o 00:03:28.263 LINK simple_copy 00:03:28.263 LINK accel_ut 00:03:28.263 LINK connect_stress 00:03:28.263 LINK boot_partition 00:03:28.263 CC test/nvme/fused_ordering/fused_ordering.o 00:03:28.263 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:28.263 
LINK fused_ordering 00:03:28.263 LINK esnap 00:03:28.522 LINK nvme_compliance 00:03:28.522 LINK doorbell_aers 00:03:28.522 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:03:28.780 CC test/unit/lib/blob/blob.c/blob_ut.o 00:03:28.780 LINK blob_bdev_ut 00:03:28.780 CC test/nvme/fdp/fdp.o 00:03:29.038 CC test/nvme/cuse/cuse.o 00:03:29.038 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:03:29.038 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:03:29.038 LINK fdp 00:03:29.038 LINK tree_ut 00:03:29.039 CC test/unit/lib/bdev/part.c/part_ut.o 00:03:29.039 CC test/unit/lib/dma/dma.c/dma_ut.o 00:03:29.039 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:03:29.297 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:03:29.297 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:03:29.297 LINK dma_ut 00:03:29.297 LINK blobfs_bdev_ut 00:03:29.555 LINK scsi_nvme_ut 00:03:29.555 LINK cuse 00:03:29.555 LINK blobfs_async_ut 00:03:29.555 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:03:29.555 CC test/unit/lib/event/app.c/app_ut.o 00:03:29.555 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:03:29.555 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:03:29.814 LINK bdev_ut 00:03:29.814 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:03:29.814 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:03:29.814 LINK blobfs_sync_ut 00:03:29.814 LINK gpt_ut 00:03:29.814 LINK ioat_ut 00:03:30.073 LINK app_ut 00:03:30.073 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:03:30.073 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:03:30.073 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:03:30.073 CC test/unit/lib/log/log.c/log_ut.o 00:03:30.332 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:03:30.332 LINK conn_ut 00:03:30.332 LINK log_ut 00:03:30.332 LINK jsonrpc_server_ut 00:03:30.591 LINK vbdev_lvol_ut 00:03:30.591 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:03:30.591 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:03:30.591 CC test/unit/lib/notify/notify.c/notify_ut.o 00:03:30.850 LINK part_ut 00:03:30.850 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:03:30.850 LINK reactor_ut 00:03:30.850 LINK json_parse_ut 00:03:30.850 LINK init_grp_ut 00:03:30.850 LINK notify_ut 00:03:31.108 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:03:31.108 LINK bdev_zone_ut 00:03:31.108 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:03:31.108 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:03:31.108 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:03:31.108 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:03:31.108 LINK bdev_raid_ut 00:03:31.365 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:03:31.365 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:03:31.365 LINK json_util_ut 00:03:31.623 LINK vbdev_zone_block_ut 00:03:31.623 LINK lvol_ut 00:03:31.623 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:03:31.623 LINK bdev_raid_sb_ut 00:03:31.623 CC test/unit/lib/iscsi/param.c/param_ut.o 00:03:31.881 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:03:31.881 LINK bdev_ut 00:03:31.881 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:03:31.881 LINK nvme_ut 00:03:32.139 LINK param_ut 00:03:32.139 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:03:32.139 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:03:32.139 LINK json_write_ut 00:03:32.139 LINK blob_ut 00:03:32.139 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:03:32.397 LINK concat_ut 00:03:32.397 LINK iscsi_ut 00:03:32.397 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:03:32.655 CC 
test/unit/lib/sock/sock.c/sock_ut.o 00:03:32.655 LINK raid1_ut 00:03:32.655 CC test/unit/lib/thread/thread.c/thread_ut.o 00:03:32.655 LINK portal_grp_ut 00:03:32.655 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:03:32.655 CC test/unit/lib/sock/posix.c/posix_ut.o 00:03:32.655 LINK dev_ut 00:03:32.913 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:03:32.913 LINK nvme_ctrlr_ut 00:03:32.913 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:03:32.913 LINK nvme_ctrlr_cmd_ut 00:03:33.172 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:03:33.172 LINK tgt_node_ut 00:03:33.172 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:03:33.172 LINK iobuf_ut 00:03:33.429 LINK posix_ut 00:03:33.429 LINK sock_ut 00:03:33.429 CC test/unit/lib/util/base64.c/base64_ut.o 00:03:33.429 LINK lun_ut 00:03:33.429 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:03:33.429 LINK thread_ut 00:03:33.429 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:03:33.687 LINK bdev_nvme_ut 00:03:33.687 LINK base64_ut 00:03:33.687 LINK nvme_ctrlr_ocssd_cmd_ut 00:03:33.687 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:03:33.687 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:03:33.687 LINK cpuset_ut 00:03:33.687 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:03:33.687 LINK tcp_ut 00:03:33.687 LINK bit_array_ut 00:03:33.687 LINK nvme_ns_ut 00:03:33.687 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:03:33.687 LINK scsi_ut 00:03:33.945 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:03:33.945 LINK pci_event_ut 00:03:33.945 LINK crc16_ut 00:03:33.945 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:03:33.945 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:03:33.945 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:03:33.945 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:03:33.945 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:03:33.945 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:03:33.945 LINK subsystem_ut 00:03:33.945 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:03:34.204 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:03:34.204 LINK rpc_ut 00:03:34.204 LINK crc32_ieee_ut 00:03:34.204 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:03:34.204 LINK crc32c_ut 00:03:34.462 LINK crc64_ut 00:03:34.462 LINK idxd_user_ut 00:03:34.462 CC test/unit/lib/util/dif.c/dif_ut.o 00:03:34.462 CC test/unit/lib/util/iov.c/iov_ut.o 00:03:34.462 CC test/unit/lib/rdma/common.c/common_ut.o 00:03:34.462 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:03:34.462 CC test/unit/lib/util/math.c/math_ut.o 00:03:34.721 LINK scsi_bdev_ut 00:03:34.721 LINK iov_ut 00:03:34.721 LINK math_ut 00:03:34.721 LINK common_ut 00:03:34.721 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:03:34.721 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:03:34.980 CC test/unit/lib/util/string.c/string_ut.o 00:03:34.980 LINK nvme_ns_ocssd_cmd_ut 00:03:34.980 LINK nvme_ns_cmd_ut 00:03:34.980 LINK idxd_ut 00:03:34.980 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:03:35.238 LINK string_ut 00:03:35.238 LINK pipe_ut 00:03:35.238 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:03:35.238 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:03:35.238 LINK scsi_pr_ut 00:03:35.238 LINK dif_ut 00:03:35.238 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:03:35.238 LINK ftl_l2p_ut 00:03:35.238 CC test/unit/lib/util/xor.c/xor_ut.o 00:03:35.238 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:03:35.497 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:03:35.497 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:03:35.497 LINK vhost_ut 00:03:35.497 
CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:03:35.497 LINK xor_ut 00:03:35.756 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:03:35.756 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:03:35.756 LINK ctrlr_ut 00:03:35.756 LINK nvme_poll_group_ut 00:03:35.756 LINK nvme_quirks_ut 00:03:35.756 LINK ftl_io_ut 00:03:36.015 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:03:36.015 LINK ftl_band_ut 00:03:36.015 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:03:36.015 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:03:36.015 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:03:36.015 LINK nvme_qpair_ut 00:03:36.274 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:03:36.274 LINK nvme_pcie_ut 00:03:36.274 LINK ftl_bitmap_ut 00:03:36.274 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:03:36.274 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:03:36.274 LINK ftl_mempool_ut 00:03:36.274 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:03:36.534 LINK subsystem_ut 00:03:36.534 LINK nvme_io_msg_ut 00:03:36.534 LINK ctrlr_bdev_ut 00:03:36.534 LINK nvme_transport_ut 00:03:36.534 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:03:36.534 LINK ctrlr_discovery_ut 00:03:36.802 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:03:36.802 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:03:36.802 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:03:36.802 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:03:36.802 LINK ftl_mngt_ut 00:03:36.802 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:03:37.063 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:03:37.063 LINK nvme_fabric_ut 00:03:37.063 LINK nvme_pcie_common_ut 00:03:37.063 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:03:37.063 LINK nvme_tcp_ut 00:03:37.063 LINK nvme_opal_ut 00:03:37.322 LINK nvmf_ut 00:03:37.322 LINK ftl_sb_ut 00:03:37.580 LINK ftl_layout_upgrade_ut 00:03:37.839 LINK nvme_rdma_ut 00:03:37.839 LINK nvme_cuse_ut 00:03:38.098 LINK transport_ut 00:03:38.098 LINK rdma_ut 00:03:38.356 00:03:38.356 real 0m33.995s 00:03:38.356 user 3m41.420s 00:03:38.356 sys 0m56.128s 00:03:38.356 ************************************ 00:03:38.356 END TEST unittest_build 00:03:38.356 ************************************ 00:03:38.356 07:49:44 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:38.356 07:49:44 -- common/autotest_common.sh@10 -- $ set +x 00:03:38.356 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:03:38.616 07:49:44 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:38.616 07:49:44 -- nvmf/common.sh@7 -- # uname -s 00:03:38.616 07:49:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:38.616 07:49:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:38.616 07:49:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:38.616 07:49:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:38.616 07:49:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:38.616 07:49:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:38.616 07:49:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:38.616 07:49:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:38.616 07:49:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:38.616 07:49:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:38.616 07:49:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3d3a80ad-34f3-41e0-ae6e-6bdf98f71bbc 00:03:38.616 07:49:44 -- nvmf/common.sh@18 -- # 
NVME_HOSTID=3d3a80ad-34f3-41e0-ae6e-6bdf98f71bbc 00:03:38.616 07:49:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:38.616 07:49:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:38.616 07:49:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:38.616 07:49:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:38.616 07:49:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:38.616 07:49:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:38.616 07:49:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:38.616 07:49:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:03:38.616 07:49:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:03:38.616 07:49:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:03:38.616 07:49:44 -- paths/export.sh@5 -- # export PATH 00:03:38.616 07:49:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:03:38.616 07:49:44 -- nvmf/common.sh@46 -- # : 0 00:03:38.616 07:49:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:38.616 07:49:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:38.616 07:49:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:38.616 07:49:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:38.616 07:49:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:38.616 07:49:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:38.616 07:49:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:38.616 07:49:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:38.616 07:49:44 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:38.616 07:49:44 -- spdk/autotest.sh@32 -- # uname -s 00:03:38.616 07:49:44 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:38.616 07:49:44 -- spdk/autotest.sh@33 -- # old_core_pattern=core 00:03:38.616 07:49:44 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:38.616 07:49:44 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:38.616 07:49:44 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:38.616 07:49:44 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:38.616 modprobe: FATAL: Module nbd not found. 
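The autotest.sh prologue above reroutes kernel core dumps to SPDK's collector script and starts the CPU-load and vmstat monitors; the failed modprobe of nbd is tolerated (note the true that follows). A sketch of the core-dump setup; writing to /proc/sys/kernel/core_pattern is an assumption based on the standard kernel mechanism, since the xtrace output does not show the redirect targets:

# Pipe core dumps through the collector instead of writing plain 'core' files (target assumed)
mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps
echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' > /proc/sys/kernel/core_pattern

# NBD is optional for this run; a missing module must not abort the job
modprobe nbd || true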
00:03:38.616 07:49:44 -- spdk/autotest.sh@44 -- # true 00:03:38.616 07:49:44 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:38.616 07:49:44 -- spdk/autotest.sh@46 -- # udevadm=/sbin/udevadm 00:03:38.616 07:49:44 -- spdk/autotest.sh@48 -- # udevadm_pid=43374 00:03:38.616 07:49:44 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:38.616 07:49:44 -- spdk/autotest.sh@47 -- # /sbin/udevadm monitor --property 00:03:38.616 07:49:44 -- spdk/autotest.sh@54 -- # echo 43376 00:03:38.616 07:49:44 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:38.616 07:49:44 -- spdk/autotest.sh@56 -- # echo 43377 00:03:38.616 07:49:44 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:38.616 07:49:44 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:38.616 07:49:44 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:38.616 07:49:44 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:38.616 07:49:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:38.616 07:49:44 -- common/autotest_common.sh@10 -- # set +x 00:03:38.616 07:49:44 -- spdk/autotest.sh@70 -- # create_test_list 00:03:38.616 07:49:44 -- common/autotest_common.sh@736 -- # xtrace_disable 00:03:38.616 07:49:44 -- common/autotest_common.sh@10 -- # set +x 00:03:38.616 07:49:44 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:38.616 07:49:44 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:38.616 07:49:44 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:38.616 07:49:44 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:38.616 07:49:44 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:38.616 07:49:44 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:38.616 07:49:44 -- common/autotest_common.sh@1440 -- # uname 00:03:38.616 07:49:44 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:03:38.616 07:49:44 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:38.616 07:49:44 -- common/autotest_common.sh@1460 -- # uname 00:03:38.616 07:49:44 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:03:38.616 07:49:44 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:03:38.616 07:49:44 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:03:38.616 07:49:44 -- spdk/autotest.sh@83 -- # hash lcov 00:03:38.616 07:49:44 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:38.616 07:49:44 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:03:38.616 --rc lcov_branch_coverage=1 00:03:38.616 --rc lcov_function_coverage=1 00:03:38.616 --rc genhtml_branch_coverage=1 00:03:38.616 --rc genhtml_function_coverage=1 00:03:38.616 --rc genhtml_legend=1 00:03:38.616 --rc geninfo_all_blocks=1 00:03:38.616 ' 00:03:38.616 07:49:44 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:03:38.616 --rc lcov_branch_coverage=1 00:03:38.616 --rc lcov_function_coverage=1 00:03:38.616 --rc genhtml_branch_coverage=1 00:03:38.616 --rc genhtml_function_coverage=1 00:03:38.616 --rc genhtml_legend=1 00:03:38.616 --rc geninfo_all_blocks=1 00:03:38.616 ' 00:03:38.616 07:49:44 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:03:38.616 --rc lcov_branch_coverage=1 00:03:38.616 --rc lcov_function_coverage=1 00:03:38.616 --rc genhtml_branch_coverage=1 00:03:38.616 --rc 
genhtml_function_coverage=1 00:03:38.616 --rc genhtml_legend=1 00:03:38.616 --rc geninfo_all_blocks=1 00:03:38.616 --no-external' 00:03:38.616 07:49:44 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:03:38.616 --rc lcov_branch_coverage=1 00:03:38.616 --rc lcov_function_coverage=1 00:03:38.616 --rc genhtml_branch_coverage=1 00:03:38.616 --rc genhtml_function_coverage=1 00:03:38.616 --rc genhtml_legend=1 00:03:38.616 --rc geninfo_all_blocks=1 00:03:38.616 --no-external' 00:03:38.616 07:49:44 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:38.616 lcov: LCOV version 1.15 00:03:38.616 07:49:44 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:46.761 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:46.761 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:46.761 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:46.761 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:46.761 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:46.761 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:01.662 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no 
functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:01.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:01.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:01.921 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:01.921 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:01.921 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:01.921 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:01.921 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:01.921 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:01.921 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:01.921 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:01.921 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:01.921 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:01.921 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:01.921 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:01.921 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:01.921 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:01.921 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:01.921 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:01.921 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:01.921 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:01.921 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:01.921 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:01.921 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:01.921 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:01.921 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:01.921 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:01.921 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:01.921 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:01.921 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:01.921 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:01.921 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:01.921 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:01.921 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:01.921 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:01.921 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:01.921 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:01.921 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:01.921 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:01.921 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:01.921 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:01.921 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:01.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:01.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:01.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:01.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:01.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:01.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:01.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:01.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:01.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:01.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:01.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:01.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:01.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:01.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:01.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:01.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:01.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:01.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 
00:04:01.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:01.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:01.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:01.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:01.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:01.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:01.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:01.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:01.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:01.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:01.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:01.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:01.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:01.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:01.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:01.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:01.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:01.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:01.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:01.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:01.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:01.922 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:01.922 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:40.633 07:50:43 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:04:40.633 07:50:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:40.633 07:50:43 -- common/autotest_common.sh@10 -- # set +x 00:04:40.633 07:50:43 -- spdk/autotest.sh@102 -- # rm -f 00:04:40.633 07:50:43 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:40.633 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:04:40.633 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:40.633 07:50:43 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:04:40.633 07:50:43 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:40.633 07:50:43 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:40.633 07:50:43 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:40.633 07:50:43 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:40.633 07:50:43 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:40.633 07:50:43 -- 
common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:40.633 07:50:43 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:40.633 07:50:43 -- common/autotest_common.sh@1649 -- # return 1 00:04:40.633 07:50:43 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:04:40.633 07:50:43 -- spdk/autotest.sh@121 -- # grep -v p 00:04:40.633 07:50:43 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:04:40.633 07:50:43 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:40.633 07:50:43 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:40.633 07:50:43 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:04:40.633 07:50:43 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:40.633 07:50:43 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:40.633 No valid GPT data, bailing 00:04:40.633 07:50:43 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:40.633 07:50:43 -- scripts/common.sh@393 -- # pt= 00:04:40.633 07:50:43 -- scripts/common.sh@394 -- # return 1 00:04:40.633 07:50:43 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:40.633 1+0 records in 00:04:40.633 1+0 records out 00:04:40.633 1048576 bytes (1.0 MB) copied, 0.00454292 s, 231 MB/s 00:04:40.633 07:50:43 -- spdk/autotest.sh@129 -- # sync 00:04:40.633 07:50:43 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:40.633 07:50:43 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:40.633 07:50:43 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:40.633 07:50:45 -- spdk/autotest.sh@135 -- # uname -s 00:04:40.633 07:50:45 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:04:40.633 07:50:45 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:40.633 07:50:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:40.633 07:50:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:40.633 07:50:45 -- common/autotest_common.sh@10 -- # set +x 00:04:40.633 ************************************ 00:04:40.633 START TEST setup.sh 00:04:40.633 ************************************ 00:04:40.633 07:50:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:40.633 * Looking for test storage... 00:04:40.633 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:40.633 07:50:45 -- setup/test-setup.sh@10 -- # uname -s 00:04:40.633 07:50:45 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:40.633 07:50:45 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:40.633 07:50:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:40.633 07:50:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:40.633 07:50:45 -- common/autotest_common.sh@10 -- # set +x 00:04:40.633 ************************************ 00:04:40.633 START TEST acl 00:04:40.633 ************************************ 00:04:40.633 07:50:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:40.633 * Looking for test storage... 
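
Before any test touches the disks, autotest classifies them: get_zoned_devs skips zoned namespaces, spdk-gpt.py and blkid look for a partition table ("No valid GPT data, bailing" plus an empty PTTYPE is what clears the device for reuse), and dd then zeroes the first MiB; the reported 231 MB/s is simply 1048576 bytes over 0.00454292 s. The same probes written out standalone, for the one namespace this VM exposes:

  dev=nvme0n1

  # Zoned? /sys/block/<dev>/queue/zoned reads "none" for ordinary namespaces,
  # "host-aware" or "host-managed" for zoned ones.
  [[ -e /sys/block/$dev/queue/zoned && $(</sys/block/$dev/queue/zoned) != none ]] &&
      echo "$dev is zoned, excluded from the wipe"

  # Partition table? blkid prints "gpt" or "dos"; empty output means no
  # recognized table, which is what lets the wipe proceed above.
  pt=$(blkid -s PTTYPE -o value "/dev/$dev")
  echo "pt=${pt:-<none>}"

  # The wipe the traced run performs (destructive, shown commented out):
  # dd if=/dev/zero of=/dev/$dev bs=1M count=1
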
00:04:40.633 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:40.633 07:50:45 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:40.633 07:50:45 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:40.633 07:50:45 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:40.633 07:50:45 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:40.633 07:50:45 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:40.633 07:50:45 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:40.633 07:50:45 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:40.633 07:50:45 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:40.633 07:50:45 -- common/autotest_common.sh@1649 -- # return 1 00:04:40.633 07:50:45 -- setup/acl.sh@12 -- # devs=() 00:04:40.633 07:50:45 -- setup/acl.sh@12 -- # declare -a devs 00:04:40.633 07:50:45 -- setup/acl.sh@13 -- # drivers=() 00:04:40.633 07:50:45 -- setup/acl.sh@13 -- # declare -A drivers 00:04:40.633 07:50:45 -- setup/acl.sh@51 -- # setup reset 00:04:40.633 07:50:45 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:40.633 07:50:45 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:40.633 07:50:45 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:40.633 07:50:45 -- setup/acl.sh@16 -- # local dev driver 00:04:40.633 07:50:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.633 07:50:45 -- setup/acl.sh@15 -- # setup output status 00:04:40.633 07:50:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.633 07:50:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:40.633 Hugepages 00:04:40.633 node hugesize free / total 00:04:40.633 07:50:45 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:40.633 07:50:45 -- setup/acl.sh@19 -- # continue 00:04:40.633 07:50:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.633 00:04:40.633 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:40.633 07:50:45 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:40.633 07:50:45 -- setup/acl.sh@19 -- # continue 00:04:40.633 07:50:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.633 07:50:45 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:40.633 07:50:45 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:40.633 07:50:45 -- setup/acl.sh@20 -- # continue 00:04:40.633 07:50:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.633 07:50:45 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:40.633 07:50:45 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:40.633 07:50:45 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:40.633 07:50:45 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:40.633 07:50:45 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:40.633 07:50:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.633 07:50:45 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:40.633 07:50:45 -- setup/acl.sh@54 -- # run_test denied denied 00:04:40.633 07:50:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:40.633 07:50:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:40.633 07:50:45 -- common/autotest_common.sh@10 -- # set +x 00:04:40.633 ************************************ 00:04:40.633 START TEST denied 00:04:40.633 ************************************ 00:04:40.633 07:50:45 -- common/autotest_common.sh@1104 -- # denied 00:04:40.633 07:50:45 -- setup/acl.sh@38 -- # 
PCI_BLOCKED=' 0000:00:06.0' 00:04:40.633 07:50:45 -- setup/acl.sh@38 -- # setup output config 00:04:40.633 07:50:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.633 07:50:45 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:40.633 07:50:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:40.633 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:40.633 07:50:46 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:40.633 07:50:46 -- setup/acl.sh@28 -- # local dev driver 00:04:40.633 07:50:46 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:40.633 07:50:46 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:40.633 07:50:46 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:40.633 07:50:46 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:40.633 07:50:46 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:40.633 07:50:46 -- setup/acl.sh@41 -- # setup reset 00:04:40.633 07:50:46 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:40.633 07:50:46 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:40.633 00:04:40.633 real 0m0.619s 00:04:40.633 user 0m0.289s 00:04:40.633 sys 0m0.380s 00:04:40.633 ************************************ 00:04:40.633 07:50:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.633 07:50:46 -- common/autotest_common.sh@10 -- # set +x 00:04:40.633 END TEST denied 00:04:40.633 ************************************ 00:04:40.893 07:50:46 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:40.893 07:50:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:40.893 07:50:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:40.893 07:50:46 -- common/autotest_common.sh@10 -- # set +x 00:04:40.893 ************************************ 00:04:40.893 START TEST allowed 00:04:40.893 ************************************ 00:04:40.893 07:50:46 -- common/autotest_common.sh@1104 -- # allowed 00:04:40.893 07:50:46 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:40.893 07:50:46 -- setup/acl.sh@45 -- # setup output config 00:04:40.893 07:50:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.893 07:50:46 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:40.893 07:50:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:41.152 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:41.152 07:50:46 -- setup/acl.sh@47 -- # verify 00:04:41.152 07:50:46 -- setup/acl.sh@28 -- # local dev driver 00:04:41.152 07:50:46 -- setup/acl.sh@48 -- # setup reset 00:04:41.152 07:50:46 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:41.152 07:50:46 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:41.720 00:04:41.720 real 0m0.773s 00:04:41.720 user 0m0.284s 00:04:41.720 sys 0m0.479s 00:04:41.720 07:50:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.720 ************************************ 00:04:41.720 END TEST allowed 00:04:41.720 ************************************ 00:04:41.720 07:50:47 -- common/autotest_common.sh@10 -- # set +x 00:04:41.720 00:04:41.720 real 0m2.046s 00:04:41.720 user 0m0.877s 00:04:41.720 sys 0m1.254s 00:04:41.720 07:50:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.720 07:50:47 -- common/autotest_common.sh@10 -- # set +x 00:04:41.720 ************************************ 00:04:41.720 END TEST acl 00:04:41.720 
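
The denied/allowed pair exercises setup.sh's PCI filtering: with PCI_BLOCKED=' 0000:00:06.0' the NVMe controller must stay on the kernel nvme driver, and with PCI_ALLOWED=0000:00:06.0 it must be rebound to uio_pci_generic, as the allowed run above shows. verify resolves the binding through sysfs, and collect_setup_devs earlier parsed the status table with read -r _ dev _ _ _ driver _, so only the BDF and driver columns matter to the test. A minimal sketch of the binding check, using the BDF from this log:

  bdf=0000:00:06.0
  if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
      driver=$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")
      echo "$bdf -> $driver"   # "nvme" while blocked, "uio_pci_generic" after config
  else
      echo "$bdf is unbound"
  fi
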
************************************ 00:04:41.720 07:50:47 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:41.720 07:50:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:41.720 07:50:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:41.720 07:50:47 -- common/autotest_common.sh@10 -- # set +x 00:04:41.720 ************************************ 00:04:41.720 START TEST hugepages 00:04:41.720 ************************************ 00:04:41.720 07:50:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:41.720 * Looking for test storage... 00:04:41.720 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:41.720 07:50:47 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:41.720 07:50:47 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:41.720 07:50:47 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:41.720 07:50:47 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:41.720 07:50:47 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:41.720 07:50:47 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:41.720 07:50:47 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:41.720 07:50:47 -- setup/common.sh@18 -- # local node= 00:04:41.721 07:50:47 -- setup/common.sh@19 -- # local var val 00:04:41.721 07:50:47 -- setup/common.sh@20 -- # local mem_f mem 00:04:41.721 07:50:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.721 07:50:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.721 07:50:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.721 07:50:47 -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.721 07:50:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 3337560 kB' 'MemAvailable: 7392060 kB' 'Buffers: 2068 kB' 'Cached: 4216540 kB' 'SwapCached: 0 kB' 'Active: 2808016 kB' 'Inactive: 1498640 kB' 'Active(anon): 88260 kB' 'Inactive(anon): 16688 kB' 'Active(file): 2719756 kB' 'Inactive(file): 1481952 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'AnonPages: 88340 kB' 'Mapped: 25408 kB' 'Shmem: 16896 kB' 'Slab: 236448 kB' 'SReclaimable: 174880 kB' 'SUnreclaim: 61568 kB' 'KernelStack: 6048 kB' 'PageTables: 8588 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4053420 kB' 'Committed_AS: 336432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38768 kB' 'VmallocChunk: 34359683580 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 104300 kB' 'DirectMap2M: 5138432 kB' 'DirectMap1G: 9437184 kB' 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 
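
get_meminfo, about to be traced at length, snapshots /proc/meminfo with mapfile (stripping any leading "Node N" prefix so per-node meminfo files parse the same way) and then walks key/value pairs with IFS=': ' until the requested key matches; the Hugepagesize walk below ends by echoing 2048. A compact equivalent of the lookup, without the per-node option:

  get_meminfo() {
      local key=$1 var val _
      while IFS=': ' read -r var val _; do
          # "Hugepagesize:  2048 kB" splits into var=Hugepagesize val=2048 _=kB
          [[ $var == "$key" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }

  get_meminfo Hugepagesize   # -> 2048 on this node, matching the trace
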
00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 
07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- 
# continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.721 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.721 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.722 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.722 07:50:47 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.722 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.722 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.722 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.722 07:50:47 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.722 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.722 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.722 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.722 07:50:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.722 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.722 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.722 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.722 07:50:47 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.722 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.722 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.722 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.722 07:50:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.722 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.722 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.722 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.722 07:50:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.722 07:50:47 -- setup/common.sh@32 -- # continue 00:04:41.722 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.722 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.722 07:50:47 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.722 07:50:47 -- setup/common.sh@33 -- # echo 2048 00:04:41.722 07:50:47 -- setup/common.sh@33 -- # return 0 00:04:41.722 07:50:47 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:41.722 07:50:47 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:41.722 07:50:47 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:41.722 07:50:47 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:41.722 07:50:47 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:41.722 07:50:47 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:41.722 07:50:47 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:41.722 07:50:47 -- setup/hugepages.sh@207 -- # get_nodes 00:04:41.722 07:50:47 -- setup/hugepages.sh@27 -- # local node 00:04:41.722 07:50:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.722 07:50:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:41.722 07:50:47 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:41.722 07:50:47 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:41.722 07:50:47 -- setup/hugepages.sh@208 -- # clear_hp 00:04:41.722 07:50:47 -- setup/hugepages.sh@37 -- # local node hp 00:04:41.722 07:50:47 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:41.722 07:50:47 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:41.722 07:50:47 -- setup/hugepages.sh@41 -- # echo 0 00:04:41.722 07:50:47 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:41.722 07:50:47 -- setup/hugepages.sh@41 -- # echo 0 00:04:41.722 07:50:47 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:41.722 07:50:47 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:41.722 07:50:47 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:41.722 07:50:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:41.722 07:50:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:41.722 07:50:47 -- common/autotest_common.sh@10 -- # set +x 00:04:41.722 ************************************ 00:04:41.722 START TEST default_setup 00:04:41.722 ************************************ 00:04:41.722 07:50:47 -- common/autotest_common.sh@1104 -- # default_setup 00:04:41.722 07:50:47 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:41.722 07:50:47 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:41.722 07:50:47 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:41.722 07:50:47 -- setup/hugepages.sh@51 -- # shift 00:04:41.722 07:50:47 -- 
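
Once Hugepagesize is known, the harness enumerates NUMA nodes, clear_hp zeroes every per-node nr_hugepages, and default_setup sizes the test: get_test_nr_hugepages 2097152 0 divides the request by the 2048 kB page size, so nr_hugepages lands at 1024 on the lines that follow (2 GiB of 2 MiB pages). The same bookkeeping, assuming the get_meminfo helper sketched earlier:

  # clear_hp equivalent: reset every node's count before the test sizes it
  # (root-only writes, as in the CI run).
  for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
      echo 0 > "$hp"
  done

  default_hugepages=$(get_meminfo Hugepagesize)   # 2048 kB on this node
  size=2097152
  nr_hugepages=$(( size / default_hugepages ))    # 2097152 / 2048 = 1024
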
setup/hugepages.sh@52 -- # node_ids=("$@") 00:04:41.722 07:50:47 -- setup/hugepages.sh@52 -- # local node_ids 00:04:41.722 07:50:47 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:41.722 07:50:47 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:41.722 07:50:47 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:41.722 07:50:47 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:04:41.722 07:50:47 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:41.722 07:50:47 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:41.722 07:50:47 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:41.722 07:50:47 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:41.722 07:50:47 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:41.722 07:50:47 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:41.722 07:50:47 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:41.722 07:50:47 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:41.722 07:50:47 -- setup/hugepages.sh@73 -- # return 0 00:04:41.722 07:50:47 -- setup/hugepages.sh@137 -- # setup output 00:04:41.722 07:50:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.722 07:50:47 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:41.982 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:04:42.245 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:42.245 07:50:47 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:42.245 07:50:47 -- setup/hugepages.sh@89 -- # local node 00:04:42.245 07:50:47 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:42.245 07:50:47 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:42.245 07:50:47 -- setup/hugepages.sh@92 -- # local surp 00:04:42.245 07:50:47 -- setup/hugepages.sh@93 -- # local resv 00:04:42.245 07:50:47 -- setup/hugepages.sh@94 -- # local anon 00:04:42.245 07:50:47 -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]] 00:04:42.245 07:50:47 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:42.245 07:50:47 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:42.245 07:50:47 -- setup/common.sh@18 -- # local node= 00:04:42.245 07:50:47 -- setup/common.sh@19 -- # local var val 00:04:42.245 07:50:47 -- setup/common.sh@20 -- # local mem_f mem 00:04:42.245 07:50:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.245 07:50:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.245 07:50:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.245 07:50:47 -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.245 07:50:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 07:50:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 5431168 kB' 'MemAvailable: 9486048 kB' 'Buffers: 2068 kB' 'Cached: 4216648 kB' 'SwapCached: 0 kB' 'Active: 2815468 kB' 'Inactive: 1498724 kB' 'Active(anon): 95688 kB' 'Inactive(anon): 16688 kB' 'Active(file): 2719780 kB' 'Inactive(file): 1482036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 94480 kB' 'Mapped: 25332 kB' 'Shmem: 16896 kB' 'Slab: 236872 kB' 'SReclaimable: 175152 kB' 'SUnreclaim: 61720 kB' 'KernelStack: 6048 kB' 'PageTables: 7812 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101996 
kB' 'Committed_AS: 342752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359683580 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 104300 kB' 'DirectMap2M: 5138432 kB' 'DirectMap1G: 9437184 kB' 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.245 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.245 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.246 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 07:50:47 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.246 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.246 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 07:50:47 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.246 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.246 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 07:50:47 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.246 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.246 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 07:50:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.246 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.246 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 
07:50:47 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.246 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.246 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 07:50:47 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.246 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.246 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 07:50:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.246 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.246 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 07:50:47 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.246 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.246 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 07:50:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.246 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.246 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 07:50:47 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.246 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.246 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 07:50:47 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.246 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.246 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 07:50:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.246 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.246 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 07:50:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.246 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.246 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 07:50:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.246 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.246 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 07:50:47 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.246 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.246 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 07:50:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.246 07:50:47 -- setup/common.sh@32 -- # continue 00:04:42.246 07:50:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.246 07:50:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.246 07:50:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.246 07:50:47 -- setup/common.sh@33 -- # echo 8192 00:04:42.246 07:50:47 -- setup/common.sh@33 -- # 
00:04:42.246 07:50:47 -- setup/common.sh@33 -- # return 0
00:04:42.246 07:50:47 -- setup/hugepages.sh@97 -- # anon=8192
00:04:42.246 07:50:47 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:42.246 07:50:47 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:42.246 07:50:47 -- setup/common.sh@18 -- # local node=
00:04:42.246 07:50:47 -- setup/common.sh@19 -- # local var val
00:04:42.246 07:50:47 -- setup/common.sh@20 -- # local mem_f mem
00:04:42.246 07:50:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.246 07:50:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.246 07:50:47 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.246 07:50:47 -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.246 07:50:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.246 07:50:47 -- setup/common.sh@31 -- # IFS=': '
00:04:42.246 07:50:47 -- setup/common.sh@31 -- # read -r var val _
00:04:42.246 07:50:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 5431504 kB' 'MemAvailable: 9486384 kB' 'Buffers: 2068 kB' 'Cached: 4216648 kB' 'SwapCached: 0 kB' 'Active: 2815492 kB' 'Inactive: 1498724 kB' 'Active(anon): 95712 kB' 'Inactive(anon): 16688 kB' 'Active(file): 2719780 kB' 'Inactive(file): 1482036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 94612 kB' 'Mapped: 25332 kB' 'Shmem: 16896 kB' 'Slab: 236872 kB' 'SReclaimable: 175152 kB' 'SUnreclaim: 61720 kB' 'KernelStack: 6048 kB' 'PageTables: 7792 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101996 kB' 'Committed_AS: 342752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359683580 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 104300 kB' 'DirectMap2M: 5138432 kB' 'DirectMap1G: 9437184 kB'
00:04:42.246 07:50:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.246 07:50:47 -- setup/common.sh@32 -- # continue
00:04:42.246 [... identical [[ key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue pairs for every key from MemFree through HugePages_Rsvd ...]
00:04:42.247 07:50:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.247 07:50:47 -- setup/common.sh@33 -- # echo 0
00:04:42.247 07:50:47 -- setup/common.sh@33 -- # return 0
00:04:42.247 07:50:47 -- setup/hugepages.sh@99 -- # surp=0
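The wall of xtrace above is one small helper at work: get_meminfo in test/setup/common.sh snapshots a meminfo file into an array, then scans it row by row for a single key. The sketch below is reconstructed from the trace alone, so treat the exact control flow and the combined node check as approximations rather than the verbatim common.sh source:

    # Reconstruction of get_meminfo from the xtrace above (a sketch, not the
    # authoritative test/setup/common.sh source).
    shopt -s extglob                          # enables the +([0-9]) pattern below
    get_meminfo() {
        local get=$1
        local node=$2                         # empty => system-wide lookup
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node argument, prefer that node's own meminfo file; with an
        # empty node, ".../node/meminfo" does not exist and /proc/meminfo stays.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # strip "Node N " prefixes (per-node files only)
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # the long [[ ... ]] / continue runs above
            echo "$val"                       # e.g. "8192" for AnonHugePages
            return 0
        done
        return 1
    }

In the trace, `get_meminfo AnonHugePages` printed 8192 and hugepages.sh captured it as anon=8192; the HugePages_Surp call above returned 0 the same way.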
00:04:42.247 07:50:47 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:42.247 07:50:47 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:42.247 07:50:47 -- setup/common.sh@18 -- # local node=
00:04:42.247 07:50:47 -- setup/common.sh@19 -- # local var val
00:04:42.247 07:50:47 -- setup/common.sh@20 -- # local mem_f mem
00:04:42.247 07:50:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.247 07:50:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.247 07:50:47 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.247 07:50:47 -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.247 07:50:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.247 07:50:47 -- setup/common.sh@31 -- # IFS=': '
00:04:42.247 07:50:47 -- setup/common.sh@31 -- # read -r var val _
00:04:42.247 07:50:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 5431428 kB' 'MemAvailable: 9486308 kB' 'Buffers: 2068 kB' 'Cached: 4216648 kB' 'SwapCached: 0 kB' 'Active: 2815688 kB' 'Inactive: 1498724 kB' 'Active(anon): 95908 kB' 'Inactive(anon): 16688 kB' 'Active(file): 2719780 kB' 'Inactive(file): 1482036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 94612 kB' 'Mapped: 25332 kB' 'Shmem: 16896 kB' 'Slab: 236872 kB' 'SReclaimable: 175152 kB' 'SUnreclaim: 61720 kB' 'KernelStack: 6048 kB' 'PageTables: 7792 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101996 kB' 'Committed_AS: 342752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359683580 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 104300 kB' 'DirectMap2M: 5138432 kB' 'DirectMap1G: 9437184 kB'
00:04:42.247 07:50:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:42.247 07:50:47 -- setup/common.sh@32 -- # continue
00:04:42.247 [... identical [[ key == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue pairs for every key from MemFree through HugePages_Free ...]
00:04:42.248 07:50:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:42.248 07:50:47 -- setup/common.sh@33 -- # echo 0
00:04:42.248 07:50:47 -- setup/common.sh@33 -- # return 0
00:04:42.248 nr_hugepages=1024 resv_hugepages=0 surplus_hugepages=0 anon_hugepages=8192
00:04:42.248 07:50:47 -- setup/hugepages.sh@100 -- # resv=0
00:04:42.248 07:50:47 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:42.248 07:50:47 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:42.248 07:50:47 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:42.248 07:50:47 -- setup/hugepages.sh@105 -- # echo anon_hugepages=8192
00:04:42.248 07:50:47 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:42.248 07:50:47 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
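Taken together, the lookups above feed verify_nr_hugepages' accounting identity, which the (( ... )) checks in the trace assert. With this run's values it reduces to 1024 == 1024 + 0 + 0. A self-contained restatement, values copied from the snapshot above:

    # verify_nr_hugepages' core identity, with this run's numbers inlined:
    #   HugePages_Total == nr_hugepages + HugePages_Surp + HugePages_Rsvd
    nr_hugepages=1024   # requested by the test
    surp=0              # HugePages_Surp from /proc/meminfo
    resv=0              # HugePages_Rsvd from /proc/meminfo
    total=1024          # HugePages_Total from /proc/meminfo
    (( total == nr_hugepages + surp + resv )) && echo "pool fully accounted for"
    (( total == nr_hugepages )) && echo "no surplus or reserved pages in flight"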
00:04:42.248 07:50:47 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:42.248 07:50:47 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:42.248 07:50:47 -- setup/common.sh@18 -- # local node=
00:04:42.248 07:50:47 -- setup/common.sh@19 -- # local var val
00:04:42.248 07:50:47 -- setup/common.sh@20 -- # local mem_f mem
00:04:42.248 07:50:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.248 07:50:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.248 07:50:47 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.248 07:50:47 -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.248 07:50:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.248 07:50:47 -- setup/common.sh@31 -- # IFS=': '
00:04:42.248 07:50:47 -- setup/common.sh@31 -- # read -r var val _
00:04:42.248 07:50:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 5431692 kB' 'MemAvailable: 9486572 kB' 'Buffers: 2068 kB' 'Cached: 4216648 kB' 'SwapCached: 0 kB' 'Active: 2815492 kB' 'Inactive: 1498724 kB' 'Active(anon): 95712 kB' 'Inactive(anon): 16688 kB' 'Active(file): 2719780 kB' 'Inactive(file): 1482036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 94320 kB' 'Mapped: 25332 kB' 'Shmem: 16896 kB' 'Slab: 236872 kB' 'SReclaimable: 175152 kB' 'SUnreclaim: 61720 kB' 'KernelStack: 6048 kB' 'PageTables: 7792 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101996 kB' 'Committed_AS: 342752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359683580 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 104300 kB' 'DirectMap2M: 5138432 kB' 'DirectMap1G: 9437184 kB'
00:04:42.248 07:50:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:42.248 07:50:47 -- setup/common.sh@32 -- # continue
00:04:42.249 [... identical [[ key == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue pairs for every key from MemFree through CmaFree ...]
00:04:42.249 07:50:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:42.249 07:50:47 -- setup/common.sh@33 -- # echo 1024
00:04:42.249 07:50:47 -- setup/common.sh@33 -- # return 0
00:04:42.249 07:50:47 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:42.249 07:50:47 -- setup/hugepages.sh@112 -- # get_nodes
00:04:42.250 07:50:47 -- setup/hugepages.sh@27 -- # local node
00:04:42.250 07:50:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:42.250 07:50:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:42.250 07:50:47 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:42.250 07:50:47 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:42.250 07:50:47 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:42.250 07:50:47 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
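The next lookup passes node 0, so get_meminfo switches from /proc/meminfo to that node's own meminfo file. Per-node rows carry a "Node N " prefix that system-wide rows do not, which is exactly what the `${mem[@]#Node +([0-9]) }` expansion in the trace strips. A sketch of just that step (paths are the real sysfs paths; the grep at the end is only for illustration):

    # Node-scoped meminfo rows are prefixed with the node number, e.g.
    #   /proc/meminfo:                           HugePages_Total:    1024
    #   /sys/devices/system/node/node0/meminfo:  Node 0 HugePages_Total:  1024
    shopt -s extglob                      # needed for the +([0-9]) pattern
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")      # "Node 0 HugePages_Total: 1024" -> "HugePages_Total: 1024"
    printf '%s\n' "${mem[@]}" | grep HugePages_Surp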
00:04:42.250 07:50:47 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:42.250 07:50:47 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:42.250 07:50:47 -- setup/common.sh@18 -- # local node=0
00:04:42.250 07:50:47 -- setup/common.sh@19 -- # local var val
00:04:42.250 07:50:47 -- setup/common.sh@20 -- # local mem_f mem
00:04:42.250 07:50:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.250 07:50:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:42.250 07:50:47 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:42.250 07:50:47 -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.250 07:50:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.250 07:50:47 -- setup/common.sh@31 -- # IFS=': '
00:04:42.250 07:50:47 -- setup/common.sh@31 -- # read -r var val _
00:04:42.250 07:50:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 5431952 kB' 'MemUsed: 6869196 kB' 'Active: 2815492 kB' 'Inactive: 1498724 kB' 'Active(anon): 95712 kB' 'Inactive(anon): 16688 kB' 'Active(file): 2719780 kB' 'Inactive(file): 1482036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'FilePages: 4218716 kB' 'Mapped: 25332 kB' 'AnonPages: 93932 kB' 'Shmem: 16896 kB' 'KernelStack: 6048 kB' 'PageTables: 7792 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 236872 kB' 'SReclaimable: 175152 kB' 'SUnreclaim: 61720 kB' 'AnonHugePages: 8192 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:42.250 07:50:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.250 07:50:47 -- setup/common.sh@32 -- # continue
00:04:42.250 [... identical [[ key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue pairs for every node0 key from MemFree through HugePages_Free ...]
00:04:42.250 07:50:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.250 07:50:47 -- setup/common.sh@33 -- # echo 0
00:04:42.250 07:50:47 -- setup/common.sh@33 -- # return 0
00:04:42.250 node0=1024 expecting 1024
00:04:42.250 07:50:47 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:42.250 07:50:47 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:42.250 07:50:47 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:42.250 07:50:47 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:42.250 07:50:47 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:42.250 07:50:47 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:42.250 
00:04:42.250 real	0m0.490s
00:04:42.250 user	0m0.190s
00:04:42.250 sys	0m0.288s
00:04:42.250 07:50:47 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:42.250 ************************************
00:04:42.250 END TEST default_setup
00:04:42.250 ************************************
00:04:42.250 07:50:47 -- common/autotest_common.sh@10 -- # set +x
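The next test requests a 1 GiB hugepage pool pinned to node 0. In the trace below, get_test_nr_hugepages receives size 1048576 (kB) and arrives at nr_hugepages=512; with the 2048 kB Hugepagesize reported in the snapshots above, that is the plain division 1048576 / 2048 = 512. A sketch of the conversion (the helper's full internals are not shown in this log, so the default_hugepages assumption is ours):

    # Sizing sketch for per_node_1G_alloc, assuming the script's default
    # hugepage size tracks the Hugepagesize row (2048 kB) in /proc/meminfo.
    size_kb=1048576                        # 1 GiB requested for node 0
    hugepage_kb=2048                       # Hugepagesize: 2048 kB
    nr_hugepages=$(( size_kb / hugepage_kb ))
    echo "nr_hugepages=$nr_hugepages"      # -> nr_hugepages=512, matching the trace

The result is then handed to SPDK's setup script through the NRHUGE and HUGENODE environment variables, as the @146 lines in the trace below show before scripts/setup.sh runs.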
00:04:42.250 07:50:47 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:42.250 07:50:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:42.250 07:50:47 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:42.250 07:50:47 -- common/autotest_common.sh@10 -- # set +x
00:04:42.250 ************************************
00:04:42.250 START TEST per_node_1G_alloc
00:04:42.250 ************************************
00:04:42.250 07:50:48 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc
00:04:42.250 07:50:48 -- setup/hugepages.sh@143 -- # local IFS=,
00:04:42.250 07:50:48 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:04:42.250 07:50:48 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:42.250 07:50:48 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:42.250 07:50:48 -- setup/hugepages.sh@51 -- # shift
00:04:42.250 07:50:48 -- setup/hugepages.sh@52 -- # node_ids=("$@")
00:04:42.250 07:50:48 -- setup/hugepages.sh@52 -- # local node_ids
00:04:42.250 07:50:48 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:42.251 07:50:48 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:42.251 07:50:48 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:42.251 07:50:48 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:04:42.251 07:50:48 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:42.251 07:50:48 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:42.251 07:50:48 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:42.251 07:50:48 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:42.251 07:50:48 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:42.251 07:50:48 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:42.251 07:50:48 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:42.251 07:50:48 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:42.251 07:50:48 -- setup/hugepages.sh@73 -- # return 0
00:04:42.251 07:50:48 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:42.251 07:50:48 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:04:42.251 07:50:48 -- setup/hugepages.sh@146 -- # setup output
00:04:42.251 07:50:48 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:42.251 07:50:48 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:42.515 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev
00:04:42.515 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:42.515 07:50:48 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:04:42.515 07:50:48 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:42.515 07:50:48 -- setup/hugepages.sh@89 -- # local node
00:04:42.515 07:50:48 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:42.515 07:50:48 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:42.515 07:50:48 -- setup/hugepages.sh@92 -- # local surp
00:04:42.515 07:50:48 -- setup/hugepages.sh@93 -- # local resv
00:04:42.515 07:50:48 -- setup/hugepages.sh@94 -- # local anon
00:04:42.515 07:50:48 -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]]
00:04:42.515 07:50:48 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:42.516 07:50:48 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:42.516 07:50:48 -- setup/common.sh@18 -- # local node=
00:04:42.516 07:50:48 -- setup/common.sh@19 -- # local var val
00:04:42.516 07:50:48 -- setup/common.sh@20 -- # local mem_f mem
00:04:42.516 07:50:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.516 07:50:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.516 07:50:48 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.516 07:50:48 -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.516 07:50:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.516 07:50:48 -- setup/common.sh@31 -- # IFS=': '
-- # read -r var val _ 00:04:42.516 07:50:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6479820 kB' 'MemAvailable: 10534700 kB' 'Buffers: 2068 kB' 'Cached: 4216648 kB' 'SwapCached: 0 kB' 'Active: 2815428 kB' 'Inactive: 1498724 kB' 'Active(anon): 95648 kB' 'Inactive(anon): 16688 kB' 'Active(file): 2719780 kB' 'Inactive(file): 1482036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 95000 kB' 'Mapped: 25332 kB' 'Shmem: 16896 kB' 'Slab: 236872 kB' 'SReclaimable: 175152 kB' 'SUnreclaim: 61720 kB' 'KernelStack: 6048 kB' 'PageTables: 8084 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626284 kB' 'Committed_AS: 343528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359683580 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 104300 kB' 'DirectMap2M: 5138432 kB' 'DirectMap1G: 9437184 kB' 00:04:42.516 07:50:48 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.516 07:50:48 -- setup/common.sh@32 -- # continue 00:04:42.516 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.516 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.516 07:50:48 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.516 07:50:48 -- setup/common.sh@32 -- # continue 00:04:42.516 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.516 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.516 07:50:48 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.516 07:50:48 -- setup/common.sh@32 -- # continue 00:04:42.516 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.516 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.516 07:50:48 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.516 07:50:48 -- setup/common.sh@32 -- # continue 00:04:42.516 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.516 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.516 07:50:48 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.516 07:50:48 -- setup/common.sh@32 -- # continue 00:04:42.516 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.516 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.516 07:50:48 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.516 07:50:48 -- setup/common.sh@32 -- # continue 00:04:42.516 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.516 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.516 07:50:48 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.516 07:50:48 -- setup/common.sh@32 -- # continue 00:04:42.516 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.516 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.516 07:50:48 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.516 07:50:48 -- setup/common.sh@32 -- # continue 00:04:42.516 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.516 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.516 07:50:48 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.516 07:50:48 -- setup/common.sh@32 -- # continue 00:04:42.516 07:50:48 -- setup/common.sh@31 -- # 
[... xtrace condensed: setup/common.sh@31-32 test every key from MemTotal through HardwareCorrupted against \A\n\o\n\H\u\g\e\P\a\g\e\s; each fails and hits continue ...]
00:04:42.516 07:50:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:42.516 07:50:48 -- setup/common.sh@33 -- # echo 8192
00:04:42.516 07:50:48 -- setup/common.sh@33 -- # return 0
00:04:42.516 07:50:48 -- setup/hugepages.sh@97 -- # anon=8192
00:04:42.517 07:50:48 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:42.517 07:50:48 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:42.517 07:50:48 -- setup/common.sh@18 -- # local node=
00:04:42.517 07:50:48 -- setup/common.sh@19 -- # local var val
00:04:42.517 07:50:48 -- setup/common.sh@20 -- # local mem_f mem
00:04:42.517 07:50:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.517 07:50:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.517 07:50:48 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.517 07:50:48 -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.517 07:50:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.517 07:50:48 -- setup/common.sh@31 -- # IFS=': '
00:04:42.517 07:50:48 -- setup/common.sh@31 -- # read -r var val _
00:04:42.517 07:50:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6479820 kB' 'MemAvailable: 10534700 kB' 'Buffers: 2068 kB' 'Cached: 4216648 kB' 'SwapCached: 0 kB' 'Active: 2815688 kB' 'Inactive: 1498724 kB' 'Active(anon): 95908 kB' 'Inactive(anon): 16688 kB' 'Active(file): 2719780 kB' 'Inactive(file): 1482036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 95000 kB' 'Mapped: 25332 kB' 'Shmem: 16896 kB' 'Slab: 236872 kB' 'SReclaimable: 175152 kB' 'SUnreclaim: 61720 kB' 'KernelStack: 6048 kB' 'PageTables: 8084 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626284 kB' 'Committed_AS: 343528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359683580 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 104300 kB' 'DirectMap2M: 5138432 kB' 'DirectMap1G: 9437184 kB'
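The @31/@32 lines that follow are get_meminfo's scan loop: it reads the chosen meminfo file line by line, splitting on ': ', and echoes the value once the requested key matches. A minimal re-creation assembled from this xtrace (get_meminfo_sketch is a hypothetical name, not a verbatim copy of setup/common.sh):

    shopt -s extglob   # needed for the +([0-9]) pattern below
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        local var val _
        # Per-node statistics live under sysfs rather than /proc
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    get_meminfo_sketch AnonHugePages      # would print 8192 here, per the trace
    get_meminfo_sketch HugePages_Surp 0   # per-node variant, used later in this test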
[... xtrace condensed: setup/common.sh@31-32 test every key from MemTotal through HugePages_Rsvd against \H\u\g\e\P\a\g\e\s\_\S\u\r\p; each fails and hits continue ...]
00:04:42.518 07:50:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.518 07:50:48 -- setup/common.sh@33 -- # echo 0
00:04:42.518 07:50:48 -- setup/common.sh@33 -- # return 0
00:04:42.518 07:50:48 -- setup/hugepages.sh@99 -- # surp=0
00:04:42.518 07:50:48 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:42.518 07:50:48 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:42.518 07:50:48 -- setup/common.sh@18 -- # local node=
00:04:42.518 07:50:48 -- setup/common.sh@19 -- # local var val
00:04:42.518 07:50:48 -- setup/common.sh@20 -- # local mem_f mem
00:04:42.518 07:50:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.518 07:50:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.518 07:50:48 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.518 07:50:48 -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.518 07:50:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.518 07:50:48 -- setup/common.sh@31 -- # IFS=': '
00:04:42.518 07:50:48 -- setup/common.sh@31 -- # read -r var val _
00:04:42.518 07:50:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6480340 kB' 'MemAvailable: 10535220 kB' 'Buffers: 2068 kB' 'Cached: 4216648 kB' 'SwapCached: 0 kB' 'Active: 2815688 kB' 'Inactive: 1498724 kB' 'Active(anon): 95908 kB' 'Inactive(anon): 16688 kB' 'Active(file): 2719780 kB' 'Inactive(file): 1482036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 95000 kB' 'Mapped: 25332 kB' 'Shmem: 16896 kB' 'Slab: 236872 kB' 'SReclaimable: 175152 kB' 'SUnreclaim: 61720 kB' 'KernelStack: 6048 kB' 'PageTables: 8084 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626284 kB' 'Committed_AS: 343528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359683580 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 104300 kB' 'DirectMap2M: 5138432 kB' 'DirectMap1G: 9437184 kB'
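verify_nr_hugepages makes one full pass over /proc/meminfo per counter: AnonHugePages above, HugePages_Surp just now, HugePages_Rsvd next, and HugePages_Total after that. For a quick manual look outside the harness, the same counters come out in a single pass:

    grep -E 'HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize' /proc/meminfo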
[... xtrace condensed: setup/common.sh@31-32 test every key from MemTotal through HugePages_Free against \H\u\g\e\P\a\g\e\s\_\R\s\v\d; each fails and hits continue ...]
00:04:42.519 07:50:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:42.519 07:50:48 -- setup/common.sh@33 -- # echo 0
00:04:42.519 07:50:48 -- setup/common.sh@33 -- # return 0
nr_hugepages=512
resv_hugepages=0
surplus_hugepages=0
anon_hugepages=8192
00:04:42.519 07:50:48 -- setup/hugepages.sh@100 -- # resv=0
00:04:42.519 07:50:48 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:42.519 07:50:48 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:42.519 07:50:48 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:42.519 07:50:48 -- setup/hugepages.sh@105 -- # echo anon_hugepages=8192
00:04:42.519 07:50:48 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:42.519 07:50:48 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:04:42.519 07:50:48 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:42.519 07:50:48 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:42.519 07:50:48 -- setup/common.sh@18 -- # local node=
00:04:42.519 07:50:48 -- setup/common.sh@19 -- # local var val
00:04:42.519 07:50:48 -- setup/common.sh@20 -- # local mem_f mem
00:04:42.519 07:50:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.519 07:50:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.519 07:50:48 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.519 07:50:48 -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.519 07:50:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.519 07:50:48 -- setup/common.sh@31 -- # IFS=': '
00:04:42.519 07:50:48 -- setup/common.sh@31 -- # read -r var val _
00:04:42.519 07:50:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6480600 kB' 'MemAvailable: 10535480 kB' 'Buffers: 2068 kB' 'Cached: 4216648 kB' 'SwapCached: 0 kB' 'Active: 2815428 kB' 'Inactive: 1498724 kB' 'Active(anon): 95648 kB' 'Inactive(anon): 16688 kB' 'Active(file): 2719780 kB' 'Inactive(file): 1482036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 95000 kB' 'Mapped: 25332 kB' 'Shmem: 16896 kB' 'Slab: 236872 kB' 'SReclaimable: 175152 kB' 'SUnreclaim: 61720 kB' 'KernelStack: 6048 kB' 'PageTables: 8472 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626284 kB' 'Committed_AS: 343528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359683580 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 104300 kB' 'DirectMap2M: 5138432 kB' 'DirectMap1G: 9437184 kB'
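The checks traced at hugepages.sh@107 and @109 are the core of the verification: the expected page count must match what the kernel reports once surplus and reserved pages are folded in. Condensed into standalone shell (variable names illustrative; reuses the hypothetical get_meminfo_sketch from above):

    expected=512
    surp=$(get_meminfo_sketch HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0 in this run
    total=$(get_meminfo_sketch HugePages_Total)  # 512 in this run
    (( expected == total + surp + resv )) && echo 'hugepage accounting consistent'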
[... xtrace condensed: setup/common.sh@31-32 test every key from MemTotal through CmaFree against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l; each fails and hits continue ...]
00:04:42.521 07:50:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:42.521 07:50:48 -- setup/common.sh@33 -- # echo 512
00:04:42.521 07:50:48 -- setup/common.sh@33 -- # return 0
00:04:42.521 07:50:48 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:42.521 07:50:48 -- setup/hugepages.sh@112 -- # get_nodes
00:04:42.521 07:50:48 -- setup/hugepages.sh@27 -- # local node
00:04:42.521 07:50:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
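get_nodes, traced next, walks /sys/devices/system/node/node+([0-9]) and records how many hugepages each NUMA node actually holds (nodes_sys). An illustrative stand-alone equivalent, assuming the 2048 kB page size seen in this run:

    for n in /sys/devices/system/node/node[0-9]*; do
        echo "node${n##*node}: $(cat "$n/hugepages/hugepages-2048kB/nr_hugepages") pages"
    done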
00:04:42.521 07:50:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:42.521 07:50:48 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:42.521 07:50:48 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:42.521 07:50:48 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:42.521 07:50:48 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:42.521 07:50:48 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:42.521 07:50:48 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:42.521 07:50:48 -- setup/common.sh@18 -- # local node=0
00:04:42.521 07:50:48 -- setup/common.sh@19 -- # local var val
00:04:42.521 07:50:48 -- setup/common.sh@20 -- # local mem_f mem
00:04:42.521 07:50:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.521 07:50:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:42.521 07:50:48 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:42.521 07:50:48 -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.521 07:50:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.521 07:50:48 -- setup/common.sh@31 -- # IFS=': '
00:04:42.521 07:50:48 -- setup/common.sh@31 -- # read -r var val _
00:04:42.521 07:50:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6480496 kB' 'MemUsed: 5820652 kB' 'Active: 2815428 kB' 'Inactive: 1498724 kB' 'Active(anon): 95648 kB' 'Inactive(anon): 16688 kB' 'Active(file): 2719780 kB' 'Inactive(file): 1482036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'FilePages: 4218716 kB' 'Mapped: 25332 kB' 'AnonPages: 95292 kB' 'Shmem: 16896 kB' 'KernelStack: 6048 kB' 'PageTables: 8472 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 236872 kB' 'SReclaimable: 175152 kB' 'SUnreclaim: 61720 kB' 'AnonHugePages: 8192 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
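This per-node read switches mem_f to /sys/devices/system/node/node0/meminfo. Lines in that file are prefixed with the node number, which is exactly why setup/common.sh@29 strips 'Node +([0-9]) ' before key matching. Illustratively, the raw file behind the dump above looks like (values taken from the dump):

    Node 0 MemTotal:       12301148 kB
    Node 0 MemFree:         6480496 kB
    Node 0 HugePages_Total:     512
    Node 0 HugePages_Free:      512
    Node 0 HugePages_Surp:        0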
[... xtrace condensed: setup/common.sh@31-32 test every key from MemTotal through HugePages_Free against \H\u\g\e\P\a\g\e\s\_\S\u\r\p; each fails and hits continue ...]
00:04:42.522 07:50:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.522 07:50:48 -- setup/common.sh@33 -- # echo 0
00:04:42.522 07:50:48 -- setup/common.sh@33 -- # return 0
00:04:42.522 07:50:48 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:42.522 07:50:48 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:42.522 07:50:48 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:42.522 07:50:48 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:42.522 07:50:48 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:04:42.522 07:50:48 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
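The [[ 512 == 512 ]] check closes the loop: node 0 reports exactly the 512 pages that HUGENODE=0 directed there. A quick spot-check that no other node took part of the reservation (illustrative):

    grep HugePages_Total /sys/devices/system/node/node*/meminfo
    # expected on this box: node0 shows 512; any other node would show 0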
00:04:42.522 END TEST per_node_1G_alloc
00:04:42.522 ************************************
00:04:42.522
00:04:42.522 real 0m0.289s
00:04:42.522 user 0m0.145s
00:04:42.522 sys 0m0.174s
00:04:42.522 07:50:48 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:42.522 07:50:48 -- common/autotest_common.sh@10 -- # set +x
00:04:42.799 07:50:48 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:42.799 07:50:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:42.799 07:50:48 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:42.799 07:50:48 -- common/autotest_common.sh@10 -- # set +x
00:04:42.799 ************************************
00:04:42.799 START TEST even_2G_alloc
00:04:42.799 ************************************
00:04:42.799 07:50:48 -- common/autotest_common.sh@1104 -- # even_2G_alloc
00:04:42.799 07:50:48 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:42.799 07:50:48 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:42.799 07:50:48 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:42.799 07:50:48 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:42.799 07:50:48 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:42.799 07:50:48 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:42.799 07:50:48 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:04:42.799 07:50:48 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:42.799 07:50:48 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:42.799 07:50:48 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:42.799 07:50:48 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:42.799 07:50:48 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:42.799 07:50:48 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:42.799 07:50:48 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:42.799 07:50:48 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:42.799 07:50:48 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:04:42.799 07:50:48 -- setup/hugepages.sh@83 -- # : 0
00:04:42.799 07:50:48 -- setup/hugepages.sh@84 -- # : 0
00:04:42.799 07:50:48 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:42.799 07:50:48 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:42.799 07:50:48 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:42.799 07:50:48 -- setup/hugepages.sh@153 -- # setup output
00:04:42.799 07:50:48 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:42.799 07:50:48 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:42.799 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev
00:04:42.799 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
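The get_test_nr_hugepages trace above turns the requested size into a page count. The division itself is not echoed by xtrace, but the numbers line up with plain kB arithmetic against the host's 2048 kB huge page size ('Hugepagesize: 2048 kB' in the meminfo snapshots below). A minimal sketch of that step, not the script's verbatim code:

    # Assumes sizes are in kB, matching /proc/meminfo units on this host.
    size=2097152                                  # even_2G_alloc asks for 2 GiB, in kB
    default_hugepages=2048                        # kB per huge page (Hugepagesize)
    (( size >= default_hugepages )) || exit 1     # the @55 guard seen in the trace
    nr_hugepages=$(( size / default_hugepages ))  # 2097152 / 2048 = 1024
    echo "NRHUGE=$nr_hugepages"                   # matches NRHUGE=1024 above

With HUGE_EVEN_ALLOC=yes the pages are spread evenly across NUMA nodes; on this single-node VM all 1024 land on node0, which is exactly what verify_nr_hugepages checks next.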
00:04:42.799 07:50:48 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:42.799 07:50:48 -- setup/hugepages.sh@89 -- # local node
00:04:42.799 07:50:48 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:42.799 07:50:48 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:42.799 07:50:48 -- setup/hugepages.sh@92 -- # local surp
00:04:42.799 07:50:48 -- setup/hugepages.sh@93 -- # local resv
00:04:42.799 07:50:48 -- setup/hugepages.sh@94 -- # local anon
00:04:42.799 07:50:48 -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]]
00:04:42.799 07:50:48 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:42.799 07:50:48 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:42.799 07:50:48 -- setup/common.sh@18 -- # local node=
00:04:42.799 07:50:48 -- setup/common.sh@19 -- # local var val
00:04:42.799 07:50:48 -- setup/common.sh@20 -- # local mem_f mem
00:04:42.799 07:50:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.799 07:50:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.799 07:50:48 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.799 07:50:48 -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.799 07:50:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.799 07:50:48 -- setup/common.sh@31 -- # IFS=': '
00:04:42.799 07:50:48 -- setup/common.sh@31 -- # read -r var val _
00:04:42.799 07:50:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 5430900 kB' 'MemAvailable: 9485780 kB' 'Buffers: 2068 kB' 'Cached: 4216648 kB' 'SwapCached: 0 kB' 'Active: 2815820 kB' 'Inactive: 1498724 kB' 'Active(anon): 96040 kB' 'Inactive(anon): 16688 kB' 'Active(file): 2719780 kB' 'Inactive(file): 1482036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 94904 kB' 'Mapped: 25332 kB' 'Shmem: 16896 kB' 'Slab: 236872 kB' 'SReclaimable: 175152 kB' 'SUnreclaim: 61720 kB' 'KernelStack: 6048 kB' 'PageTables: 8084 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101996 kB' 'Committed_AS: 343528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359683580 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 104300 kB' 'DirectMap2M: 5138432 kB' 'DirectMap1G: 9437184 kB'
(setup/common.sh@32 then tests each key in file order -- MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Dirty, Writeback, AnonPages, Mapped, Shmem, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted -- against \A\n\o\n\H\u\g\e\P\a\g\e\s, one continue per mismatch)
00:04:42.800 07:50:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:42.800 07:50:48 -- setup/common.sh@33 -- # echo 8192
00:04:42.800 07:50:48 -- setup/common.sh@33 -- # return 0
00:04:42.800 07:50:48 -- setup/hugepages.sh@97 -- # anon=8192
00:04:42.800 07:50:48 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
(same get_meminfo preamble as above, now with get=HugePages_Surp, node=, mem_f=/proc/meminfo, mapfile, Node-prefix strip, IFS=': ' read loop)
00:04:42.800 07:50:48 -- setup/common.sh@16 -- # printf '%s\n' (snapshot identical to the one above except 'MemFree: 5431160 kB' 'MemAvailable: 9486040 kB' 'Active: 2816080 kB' 'Active(anon): 96300 kB')
(the key-by-key scan for \H\u\g\e\P\a\g\e\s\_\S\u\r\p starts over at MemTotal)
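Every get_meminfo call in this trace follows the same shape: slurp the meminfo file, strip any "Node <n>" prefix, then walk it key by key until the requested field matches -- which is why xtrace prints one [[ ... ]] / continue pair per key. A simplified reconstruction from the trace (not the verbatim setup/common.sh source, which uses mapfile plus an extglob prefix strip, approximated here with sed):

    get_meminfo() {                    # usage: get_meminfo AnonHugePages [node]
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo
        # a per-node query reads the node-local file instead (used for node 0 later)
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed 's/^Node [0-9]* *//' "$mem_f")
        return 1
    }

Here get_meminfo AnonHugePages returns 8192 (kB), the transparent huge pages currently backing anonymous memory; the hugepages.sh@96 check above only bothers reading it because /sys/kernel/mm/transparent_hugepage/enabled is not set to [never] on this host.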
(the scan continues through MemFree, MemAvailable and the rest of the snapshot, then past CmaTotal, CmaFree, HugePages_Total, HugePages_Free and HugePages_Rsvd)
00:04:42.801 07:50:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.801 07:50:48 -- setup/common.sh@33 -- # echo 0
00:04:42.801 07:50:48 -- setup/common.sh@33 -- # return 0
00:04:42.801 07:50:48 -- setup/hugepages.sh@99 -- # surp=0
00:04:42.801 07:50:48 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
(same get_meminfo preamble, now with get=HugePages_Rsvd)
00:04:42.801 07:50:48 -- setup/common.sh@16 -- # printf '%s\n' (snapshot identical to the first except 'MemFree: 5431416 kB' 'MemAvailable: 9486296 kB' 'Active: 2816080 kB' 'Active(anon): 96300 kB')
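surp and resv map straight onto the kernel's hugetlb accounting: HugePages_Surp counts surplus pages allocated beyond nr_hugepages through overcommit, and HugePages_Rsvd counts pages a mapping has reserved but not yet faulted in. Both have to be 0 here for the even-allocation check to be meaningful. Using the get_meminfo sketch above, the three probes this trace issues back to back reduce to:

    anon=$(get_meminfo AnonHugePages)   # 8192 kB of THP in this run
    surp=$(get_meminfo HugePages_Surp)  # 0 -- nothing overcommitted
    resv=$(get_meminfo HugePages_Rsvd)  # 0 -- nothing reserved but unfaulted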
(the scan for \H\u\g\e\P\a\g\e\s\_\R\s\v\d walks the same key list from MemTotal through HugePages_Total and HugePages_Free, one continue per mismatch)
00:04:42.802 07:50:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:42.802 07:50:48 -- setup/common.sh@33 -- # echo 0
00:04:42.802 07:50:48 -- setup/common.sh@33 -- # return 0
00:04:42.802 07:50:48 -- setup/hugepages.sh@100 -- # resv=0
00:04:42.802 nr_hugepages=1024
00:04:42.802 resv_hugepages=0
00:04:42.802 surplus_hugepages=0
00:04:42.802 anon_hugepages=8192
00:04:42.802 07:50:48 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:42.802 07:50:48 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:42.802 07:50:48 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:42.802 07:50:48 -- setup/hugepages.sh@105 -- # echo anon_hugepages=8192
00:04:42.802 07:50:48 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:42.802 07:50:48 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:43.064 07:50:48 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
(same get_meminfo preamble, now with get=HugePages_Total)
00:04:43.064 07:50:48 -- setup/common.sh@16 -- # printf '%s\n' (snapshot identical to the first except 'MemFree: 5431676 kB' 'MemAvailable: 9486556 kB' 'AnonPages: 95292 kB')
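The two (( ... )) assertions just above are the test's core invariant: every huge page the kernel reports must be explained by the configured count plus surplus and reservations, and with even allocation the configured count alone must cover it. Reduced to its arithmetic, reusing the get_meminfo sketch:

    nr_hugepages=1024                             # configured via NRHUGE above
    total=$(get_meminfo HugePages_Total)          # 1024 reported by the kernel
    (( total == nr_hugepages + surp + resv )) ||  # 1024 == 1024 + 0 + 0
        { echo "hugepage accounting mismatch" >&2; exit 1; }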
(the scan for \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l walks MemTotal through CmaFree the same way, one continue per mismatch)
00:04:43.066 07:50:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:43.066 07:50:48 -- setup/common.sh@33 -- # echo 1024
00:04:43.066 07:50:48 -- setup/common.sh@33 -- # return 0
00:04:43.066 07:50:48 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:43.066 07:50:48 -- setup/hugepages.sh@112 -- # get_nodes
00:04:43.066 07:50:48 -- setup/hugepages.sh@27 -- # local node
00:04:43.066 07:50:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:43.066 07:50:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:43.066 07:50:48 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:43.066 07:50:48 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:43.066 07:50:48 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:43.066 07:50:48 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:43.066 07:50:48 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:43.066 07:50:48 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:43.066 07:50:48 -- setup/common.sh@18 -- # local node=0
00:04:43.066 07:50:48 -- setup/common.sh@19 -- # local var val
00:04:43.066 07:50:48 -- setup/common.sh@20 -- # local mem_f mem
00:04:43.066 07:50:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.066 07:50:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:43.066 07:50:48 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:43.066 07:50:48 -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.066 07:50:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.066 07:50:48 -- setup/common.sh@31 -- # IFS=': '
00:04:43.066 07:50:48 -- setup/common.sh@31 -- # read -r var val _
00:04:43.066 07:50:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 5431672 kB' 'MemUsed: 6869476 kB' 'Active: 2816080 kB' 'Inactive: 1498724 kB' 'Active(anon): 96300 kB' 'Inactive(anon): 16688 kB' 'Active(file): 2719780 kB' 'Inactive(file): 1482036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'FilePages: 4218716 kB' 'Mapped: 25332 kB' 'AnonPages: 95292 kB' 'Shmem: 16896 kB' 'KernelStack: 6048 kB' 'PageTables: 8084 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 236872 kB' 'SReclaimable: 175152 kB' 'SUnreclaim: 61720 kB' 'AnonHugePages: 8192 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
(the per-node scan for \H\u\g\e\P\a\g\e\s\_\S\u\r\p begins: MemTotal, MemFree, MemUsed, Active, ...)
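For this per-node pass the same scanner is simply pointed at /sys/devices/system/node/node0/meminfo. The node file prefixes every line with "Node 0" -- that is what the mem=("${mem[@]#Node +([0-9]) }") extglob strip in the preamble is for -- and it reports MemUsed while omitting MemAvailable. An equivalent stand-alone probe, for illustration only:

    node=0
    sed 's/^Node [0-9]* *//' "/sys/devices/system/node/node${node}/meminfo" |
        awk -F': *' '$1 == "HugePages_Surp" { print $2 }'   # prints 0 on this host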
continue 00:04:43.066 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.066 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.066 07:50:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.066 07:50:48 -- setup/common.sh@32 -- # continue 00:04:43.066 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.066 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.066 07:50:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.066 07:50:48 -- setup/common.sh@32 -- # continue 00:04:43.066 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.066 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.066 07:50:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.066 07:50:48 -- setup/common.sh@32 -- # continue 00:04:43.066 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.066 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.066 07:50:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.066 07:50:48 -- setup/common.sh@32 -- # continue 00:04:43.066 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.066 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.066 07:50:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.066 07:50:48 -- setup/common.sh@32 -- # continue 00:04:43.066 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.066 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.066 07:50:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.066 07:50:48 -- setup/common.sh@32 -- # continue 00:04:43.066 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.066 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.066 07:50:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.066 07:50:48 -- setup/common.sh@32 -- # continue 00:04:43.066 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.066 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.066 07:50:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.066 07:50:48 -- setup/common.sh@32 -- # continue 00:04:43.066 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.066 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.066 07:50:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.066 07:50:48 -- setup/common.sh@32 -- # continue 00:04:43.066 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.066 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.066 07:50:48 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.066 07:50:48 -- setup/common.sh@32 -- # continue 00:04:43.066 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.066 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.066 07:50:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.066 07:50:48 -- setup/common.sh@32 -- # continue 00:04:43.066 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.066 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.067 07:50:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.067 07:50:48 -- setup/common.sh@32 -- # continue 00:04:43.067 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.067 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.067 07:50:48 -- setup/common.sh@32 -- # [[ 
Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.067 07:50:48 -- setup/common.sh@32 -- # continue 00:04:43.067 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.067 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.067 07:50:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.067 07:50:48 -- setup/common.sh@32 -- # continue 00:04:43.067 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.067 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.067 07:50:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.067 07:50:48 -- setup/common.sh@32 -- # continue 00:04:43.067 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.067 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.067 07:50:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.067 07:50:48 -- setup/common.sh@32 -- # continue 00:04:43.067 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.067 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.067 07:50:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.067 07:50:48 -- setup/common.sh@32 -- # continue 00:04:43.067 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.067 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.067 07:50:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.067 07:50:48 -- setup/common.sh@32 -- # continue 00:04:43.067 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.067 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.067 07:50:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.067 07:50:48 -- setup/common.sh@32 -- # continue 00:04:43.067 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.067 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.067 07:50:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.067 07:50:48 -- setup/common.sh@32 -- # continue 00:04:43.067 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.067 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.067 07:50:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.067 07:50:48 -- setup/common.sh@32 -- # continue 00:04:43.067 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.067 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.067 07:50:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.067 07:50:48 -- setup/common.sh@32 -- # continue 00:04:43.067 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.067 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.067 07:50:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.067 07:50:48 -- setup/common.sh@32 -- # continue 00:04:43.067 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.067 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.067 07:50:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.067 07:50:48 -- setup/common.sh@32 -- # continue 00:04:43.067 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.067 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.067 07:50:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.067 07:50:48 -- setup/common.sh@33 -- # echo 0 00:04:43.067 07:50:48 -- setup/common.sh@33 -- # return 0 00:04:43.067 
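The wall of trace above is bash xtrace (set -x) output of one small helper, get_meminfo, which picks a meminfo file, strips the per-node prefix, and reads it field by field until the requested key matches. A minimal sketch of that technique, reconstructed from the trace itself (the shipped setup/common.sh may differ in detail, so treat this as an illustration rather than the canonical source):

  shopt -s extglob   # the +([0-9]) pattern below needs extended globbing
  # get_meminfo <field> [node]: print the value of one meminfo field, reading
  # the per-NUMA-node file when a node number is given.
  get_meminfo() {
      local get=$1 node=$2 var val _
      local mem_f=/proc/meminfo mem
      # Per-node meminfo lives in sysfs; every line carries a "Node <n> " prefix.
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")       # strip the "Node 0 " prefix
      while IFS=': ' read -r var val _; do   # split "Field: value kB"
          if [[ $var == "$get" ]]; then
              echo "$val"                    # value only, the kB unit is dropped
              return 0
          fi
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

On this box, get_meminfo HugePages_Total prints 1024 at this point in the run, which is exactly the echo 1024 visible in the trace above.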
00:04:43.067 node0=1024 expecting 1024
00:04:43.067 ************************************
00:04:43.067 END TEST even_2G_alloc
00:04:43.067 ************************************
00:04:43.067 07:50:48 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:43.067 07:50:48 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:43.067 07:50:48 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:43.067 07:50:48 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:43.067 07:50:48 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:43.067 07:50:48 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:43.067
00:04:43.067 real 0m0.288s
00:04:43.067 user 0m0.150s
00:04:43.067 sys 0m0.166s
00:04:43.067 07:50:48 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:43.067 07:50:48 -- common/autotest_common.sh@10 -- # set +x
00:04:43.067 07:50:48 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:43.067 07:50:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:43.067 07:50:48 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:43.067 07:50:48 -- common/autotest_common.sh@10 -- # set +x
00:04:43.067 ************************************
00:04:43.067 START TEST odd_alloc
00:04:43.067 ************************************
00:04:43.067 07:50:48 -- common/autotest_common.sh@1104 -- # odd_alloc
00:04:43.067 07:50:48 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:43.067 07:50:48 -- setup/hugepages.sh@49 -- # local size=2098176
00:04:43.067 07:50:48 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:43.067 07:50:48 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:43.067 07:50:48 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:43.067 07:50:48 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:43.067 07:50:48 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:04:43.067 07:50:48 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:43.067 07:50:48 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:43.067 07:50:48 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:43.067 07:50:48 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:43.067 07:50:48 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:43.067 07:50:48 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:43.067 07:50:48 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:43.067 07:50:48 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:43.067 07:50:48 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:04:43.067 07:50:48 -- setup/hugepages.sh@83 -- # : 0
00:04:43.067 07:50:48 -- setup/hugepages.sh@84 -- # : 0
00:04:43.067 07:50:48 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:43.067 07:50:48 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:43.067 07:50:48 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:43.067 07:50:48 -- setup/hugepages.sh@160 -- # setup output
00:04:43.067 07:50:48 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:43.067 07:50:48 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev
00:04:43.331 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:43.331 07:50:48 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:43.331 07:50:48 -- setup/hugepages.sh@89 -- # local node
00:04:43.331 07:50:48 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:43.331 07:50:48 -- setup/hugepages.sh@91 -- # local sorted_s
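The number worth noticing in the block above is nr_hugepages=1025: the odd_alloc test sets HUGEMEM=2049 (MB) precisely so that the page count comes out odd. With the 2048 kB Hugepagesize that every meminfo dump in this run reports, the arithmetic works out as below (the round-up is inferred from the traced result, not read from the get_test_nr_hugepages source):

  size_kb=$((2049 * 1024))                        # 2098176 kB requested
  page_kb=2048                                    # Hugepagesize from /proc/meminfo
  echo $(( (size_kb + page_kb - 1) / page_kb ))   # ceiling division: 1025 pages

2098176 / 2048 is 1024.5, so plain integer division would give 1024; only rounding up (or to nearest) yields the 1025 the trace shows.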
00:04:43.331 07:50:48 -- setup/hugepages.sh@92 -- # local surp
00:04:43.331 07:50:48 -- setup/hugepages.sh@93 -- # local resv
00:04:43.331 07:50:48 -- setup/hugepages.sh@94 -- # local anon
00:04:43.331 07:50:48 -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]]
00:04:43.331 07:50:48 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:43.331 07:50:48 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:43.331 07:50:48 -- setup/common.sh@18 -- # local node=
00:04:43.331 07:50:48 -- setup/common.sh@19 -- # local var val
00:04:43.331 07:50:48 -- setup/common.sh@20 -- # local mem_f mem
00:04:43.331 07:50:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.331 07:50:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.331 07:50:48 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.331 07:50:48 -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.331 07:50:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.331 07:50:48 -- setup/common.sh@31 -- # IFS=': '
00:04:43.331 07:50:48 -- setup/common.sh@31 -- # read -r var val _
00:04:43.331 07:50:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 5428172 kB' 'MemAvailable: 9483152 kB' 'Buffers: 2068 kB' 'Cached: 4216652 kB' 'SwapCached: 0 kB' 'Active: 2816068 kB' 'Inactive: 1498720 kB' 'Active(anon): 96276 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2719792 kB' 'Inactive(file): 1482036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 95808 kB' 'Mapped: 25300 kB' 'Shmem: 16892 kB' 'Slab: 237068 kB' 'SReclaimable: 175240 kB' 'SUnreclaim: 61828 kB' 'KernelStack: 6032 kB' 'PageTables: 7964 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5100972 kB' 'Committed_AS: 343528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359683580 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 104300 kB' 'DirectMap2M: 5138432 kB' 'DirectMap1G: 9437184 kB'
[... xtrace field-scan elided: MemTotal ... HardwareCorrupted each compared against AnonHugePages and skipped ...]
00:04:43.332 07:50:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:43.332 07:50:48 -- setup/common.sh@33 -- # echo 8192
00:04:43.332 07:50:48 -- setup/common.sh@33 -- # return 0
00:04:43.332 07:50:48 -- setup/hugepages.sh@97 -- # anon=8192
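The anon=8192 recorded above is a baseline, not a leak: the @96 test shows transparent hugepages enabled on this host ("[always] madvise never"), so /proc/meminfo reports 8192 kB of AnonHugePages that exist independently of the explicit pool and must be excluded from the comparison. A sketch of that guard, reusing the get_meminfo sketch from earlier (the sysfs path is the standard one; the variable names here are illustrative):

  # Record the AnonHugePages baseline unless THP is fully off ("[never]").
  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
  anon=0
  if [[ $thp != *"[never]"* ]]; then
      anon=$(get_meminfo AnonHugePages)   # kB of THP-backed anonymous memory
  fi
  echo "anon_hugepages=$anon"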
00:04:43.332 07:50:48 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:43.332 07:50:48 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:43.332 07:50:48 -- setup/common.sh@18 -- # local node=
00:04:43.332 07:50:48 -- setup/common.sh@19 -- # local var val
00:04:43.332 07:50:48 -- setup/common.sh@20 -- # local mem_f mem
00:04:43.332 07:50:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.332 07:50:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.332 07:50:48 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.332 07:50:48 -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.332 07:50:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.332 07:50:48 -- setup/common.sh@31 -- # IFS=': '
00:04:43.332 07:50:48 -- setup/common.sh@31 -- # read -r var val _
00:04:43.332 07:50:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 5428172 kB' 'MemAvailable: 9483152 kB' 'Buffers: 2068 kB' 'Cached: 4216652 kB' 'SwapCached: 0 kB' 'Active: 2816264 kB' 'Inactive: 1498720 kB' 'Active(anon): 96472 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2719792 kB' 'Inactive(file): 1482036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 95808 kB' 'Mapped: 25300 kB' 'Shmem: 16892 kB' 'Slab: 237068 kB' 'SReclaimable: 175240 kB' 'SUnreclaim: 61828 kB' 'KernelStack: 6032 kB' 'PageTables: 8256 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5100972 kB' 'Committed_AS: 343528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359683580 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 104300 kB' 'DirectMap2M: 5138432 kB' 'DirectMap1G: 9437184 kB'
[... xtrace field-scan elided: MemTotal ... HugePages_Rsvd each compared against HugePages_Surp and skipped ...]
00:04:43.333 07:50:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:43.333 07:50:48 -- setup/common.sh@33 -- # echo 0
00:04:43.333 07:50:48 -- setup/common.sh@33 -- # return 0
00:04:43.333 07:50:48 -- setup/hugepages.sh@99 -- # surp=0
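surp=0 here, and the resv=0 that comes out of the HugePages_Rsvd read just below, are what a healthy static pool looks like: HugePages_Surp counts pages the kernel allocated beyond nr_hugepages under overcommit, while HugePages_Rsvd counts pages promised to existing mappings but not yet faulted in. To eyeball all four counters outside the harness, a plain grep suffices:

  grep -E 'HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo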
00:04:43.333 07:50:48 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:43.333 07:50:48 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:43.333 07:50:48 -- setup/common.sh@18 -- # local node=
00:04:43.333 07:50:48 -- setup/common.sh@19 -- # local var val
00:04:43.333 07:50:48 -- setup/common.sh@20 -- # local mem_f mem
00:04:43.333 07:50:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.333 07:50:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.333 07:50:48 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.333 07:50:48 -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.333 07:50:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.333 07:50:48 -- setup/common.sh@31 -- # IFS=': '
00:04:43.333 07:50:48 -- setup/common.sh@31 -- # read -r var val _
00:04:43.333 07:50:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 5428308 kB' 'MemAvailable: 9483288 kB' 'Buffers: 2068 kB' 'Cached: 4216652 kB' 'SwapCached: 0 kB' 'Active: 2816264 kB' 'Inactive: 1498720 kB' 'Active(anon): 96472 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2719792 kB' 'Inactive(file): 1482036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 95808 kB' 'Mapped: 25300 kB' 'Shmem: 16892 kB' 'Slab: 237068 kB' 'SReclaimable: 175240 kB' 'SUnreclaim: 61828 kB' 'KernelStack: 6032 kB' 'PageTables: 8256 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5100972 kB' 'Committed_AS: 343528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359683580 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 104300 kB' 'DirectMap2M: 5138432 kB' 'DirectMap1G: 9437184 kB'
[... xtrace field-scan elided: MemTotal ... HugePages_Free each compared against HugePages_Rsvd and skipped ...]
00:04:43.334 07:50:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:43.334 07:50:48 -- setup/common.sh@33 -- # echo 0
00:04:43.334 07:50:48 -- setup/common.sh@33 -- # return 0
00:04:43.334 nr_hugepages=1025
resv_hugepages=0
surplus_hugepages=0
anon_hugepages=8192
07:50:48 -- setup/hugepages.sh@100 -- # resv=0
00:04:43.334 07:50:48 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:43.334 07:50:48 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:43.334 07:50:48 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:43.334 07:50:48 -- setup/hugepages.sh@105 -- # echo anon_hugepages=8192
00:04:43.334 07:50:48 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:43.334 07:50:48 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
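The @107 and @109 checks above, together with the @110 check that follows the next get_meminfo call, enforce one invariant: the pool size /proc/meminfo reports must equal the requested count plus surplus plus reserved pages. Spelled out with the earlier get_meminfo sketch (the literal 1025 is this run's target):

  nr_hugepages=1025
  total=$(get_meminfo HugePages_Total)
  surp=$(get_meminfo HugePages_Surp)
  resv=$(get_meminfo HugePages_Rsvd)
  (( total == nr_hugepages + surp + resv )) || echo 'hugepage pool mismatch' >&2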
00:04:43.334 07:50:48 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:43.334 07:50:48 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:43.334 07:50:48 -- setup/common.sh@18 -- # local node=
00:04:43.334 07:50:48 -- setup/common.sh@19 -- # local var val
00:04:43.334 07:50:48 -- setup/common.sh@20 -- # local mem_f mem
00:04:43.334 07:50:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.334 07:50:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.334 07:50:48 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.334 07:50:48 -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.334 07:50:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.334 07:50:48 -- setup/common.sh@31 -- # IFS=': '
00:04:43.334 07:50:48 -- setup/common.sh@31 -- # read -r var val _
00:04:43.335 07:50:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 5427976 kB' 'MemAvailable: 9482956 kB' 'Buffers: 2068 kB' 'Cached: 4216652 kB' 'SwapCached: 0 kB' 'Active: 2816068 kB' 'Inactive: 1498720 kB' 'Active(anon): 96276 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2719792 kB' 'Inactive(file): 1482036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 95516 kB' 'Mapped: 25300 kB' 'Shmem: 16892 kB' 'Slab: 237068 kB' 'SReclaimable: 175240 kB' 'SUnreclaim: 61828 kB' 'KernelStack: 6032 kB' 'PageTables: 8256 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5100972 kB' 'Committed_AS: 343528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359683580 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 104300 kB' 'DirectMap2M: 5138432 kB' 'DirectMap1G: 9437184 kB'
[... xtrace field-scan elided: MemTotal ... AnonHugePages each compared against HugePages_Total and skipped ...]
00:04:43.336 07:50:48 -- setup/common.sh@31 -- #
IFS=': ' 00:04:43.336 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.336 07:50:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.336 07:50:48 -- setup/common.sh@32 -- # continue 00:04:43.336 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.336 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.336 07:50:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.336 07:50:48 -- setup/common.sh@32 -- # continue 00:04:43.336 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.336 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.336 07:50:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.336 07:50:48 -- setup/common.sh@33 -- # echo 1025 00:04:43.336 07:50:48 -- setup/common.sh@33 -- # return 0 00:04:43.336 07:50:48 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:43.336 07:50:48 -- setup/hugepages.sh@112 -- # get_nodes 00:04:43.336 07:50:48 -- setup/hugepages.sh@27 -- # local node 00:04:43.336 07:50:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:43.336 07:50:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:43.336 07:50:48 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:43.336 07:50:48 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:43.336 07:50:48 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:43.336 07:50:48 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:43.336 07:50:48 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:43.336 07:50:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:43.336 07:50:48 -- setup/common.sh@18 -- # local node=0 00:04:43.336 07:50:48 -- setup/common.sh@19 -- # local var val 00:04:43.336 07:50:48 -- setup/common.sh@20 -- # local mem_f mem 00:04:43.336 07:50:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.336 07:50:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:43.336 07:50:48 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:43.336 07:50:48 -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.336 07:50:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.336 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.336 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.336 07:50:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 5428172 kB' 'MemUsed: 6872976 kB' 'Active: 2816068 kB' 'Inactive: 1498720 kB' 'Active(anon): 96276 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2719792 kB' 'Inactive(file): 1482036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'FilePages: 4218720 kB' 'Mapped: 25300 kB' 'AnonPages: 95516 kB' 'Shmem: 16892 kB' 'KernelStack: 6032 kB' 'PageTables: 8256 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 237068 kB' 'SReclaimable: 175240 kB' 'SUnreclaim: 61828 kB' 'AnonHugePages: 8192 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:43.336 07:50:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.336 07:50:48 -- setup/common.sh@32 -- # continue 00:04:43.336 07:50:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.336 07:50:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.336 07:50:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.336 07:50:48 -- 
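[note: the loop being stepped through above is get_meminfo from setup/common.sh; a condensed sketch reconstructed from the xtrace records alone — the helper name get_meminfo_sketch and the final return 1 are my assumptions, not confirmed by this log:]
  shopt -s extglob                              # needed for the +([0-9]) pattern below
  get_meminfo_sketch() {
      local get=$1 node=${2:-} var val _ mem
      local mem_f=/proc/meminfo
      # per-node counters live in sysfs; an empty $node keeps the global file
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem <"$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")          # strip the "Node N " prefix sysfs adds
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }
[against the node0 snapshot above, get_meminfo_sketch HugePages_Surp 0 would print 0 — the value the trace echoes just before its return 0.]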
[xtrace condensed: setup/common.sh@31-32 walks the node0 snapshot and continues past every non-matching key from MemTotal through HugePages_Free]
00:04:43.336 07:50:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:43.336 07:50:49 -- setup/common.sh@33 -- # echo 0
00:04:43.336 07:50:49 -- setup/common.sh@33 -- # return 0
node0=1025 expecting 1025
00:04:43.337 07:50:49 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:43.337 07:50:49 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:43.337 07:50:49 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:43.337 07:50:49 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:43.337 07:50:49 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:04:43.337 07:50:49 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
************************************
00:04:43.337 END TEST odd_alloc
00:04:43.337 ************************************
00:04:43.337 real 0m0.315s
00:04:43.337 user 0m0.142s
00:04:43.337 sys 0m0.206s
00:04:43.337 07:50:49 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:43.337 07:50:49 -- common/autotest_common.sh@10 -- # set +x
00:04:43.337 07:50:49 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:43.337 07:50:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:43.337 07:50:49 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:43.337 07:50:49 -- common/autotest_common.sh@10 -- # set +x
************************************
00:04:43.337 START TEST custom_alloc
00:04:43.337 ************************************
00:04:43.337 07:50:49 -- common/autotest_common.sh@1104 -- # custom_alloc
00:04:43.337 07:50:49 -- setup/hugepages.sh@167 -- # local IFS=,
00:04:43.337 07:50:49 -- setup/hugepages.sh@169 -- # local node
00:04:43.337 07:50:49 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:43.337 07:50:49 -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:43.337 07:50:49 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:43.337 07:50:49 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:43.337 07:50:49 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:43.337 07:50:49 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:43.337 07:50:49 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:43.337 07:50:49 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:43.337 07:50:49 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:43.337 07:50:49 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:04:43.337 07:50:49 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:43.337 07:50:49 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:43.337 07:50:49 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:43.337 07:50:49 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:43.337 07:50:49 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:43.337 07:50:49 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:43.337 07:50:49 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:43.337 07:50:49 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:43.337 07:50:49 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:43.337 07:50:49 -- setup/hugepages.sh@83 -- # : 0
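[note: the jump from get_test_nr_hugepages 1048576 to nr_hugepages=512 in the trace above is the requested size in kB divided by the 2048 kB Hugepagesize reported in every meminfo snapshot; as a standalone check, with variable names of my choosing:]
  size_kb=1048576 hugepagesize_kb=2048
  echo $(( size_kb / hugepagesize_kb ))   # -> 512, matching nr_hugepages=512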
00:04:43.337 07:50:49 -- setup/hugepages.sh@84 -- # : 0
00:04:43.337 07:50:49 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:43.337 07:50:49 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:43.337 07:50:49 -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:04:43.337 07:50:49 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:43.337 07:50:49 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:43.337 07:50:49 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:43.337 07:50:49 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:43.337 07:50:49 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:04:43.337 07:50:49 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:43.337 07:50:49 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:43.337 07:50:49 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:43.337 07:50:49 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:43.337 07:50:49 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:43.337 07:50:49 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:43.337 07:50:49 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:43.337 07:50:49 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:43.337 07:50:49 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:43.337 07:50:49 -- setup/hugepages.sh@78 -- # return 0
00:04:43.337 07:50:49 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:04:43.337 07:50:49 -- setup/hugepages.sh@187 -- # setup output
00:04:43.337 07:50:49 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:43.337 07:50:49 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:43.599 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev
00:04:43.599 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:43.599 07:50:49 -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:04:43.599 07:50:49 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:43.599 07:50:49 -- setup/hugepages.sh@89 -- # local node
00:04:43.599 07:50:49 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:43.599 07:50:49 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:43.599 07:50:49 -- setup/hugepages.sh@92 -- # local surp
00:04:43.599 07:50:49 -- setup/hugepages.sh@93 -- # local resv
00:04:43.599 07:50:49 -- setup/hugepages.sh@94 -- # local anon
00:04:43.599 07:50:49 -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]]
00:04:43.599 07:50:49 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:43.599 07:50:49 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:43.599 07:50:49 -- setup/common.sh@18 -- # local node=
00:04:43.599 07:50:49 -- setup/common.sh@19 -- # local var val
00:04:43.599 07:50:49 -- setup/common.sh@20 -- # local mem_f mem
00:04:43.599 07:50:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.599 07:50:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.599 07:50:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.599 07:50:49 -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.599 07:50:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.599 07:50:49 -- setup/common.sh@31 -- # IFS=': '
00:04:43.599 07:50:49 -- setup/common.sh@31 -- # read -r var val _
00:04:43.599 07:50:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6478844 kB' 'MemAvailable: 10533824 kB' 'Buffers: 2068 kB' 'Cached: 4216652 kB' 'SwapCached: 0 kB' 'Active: 2816068 kB' 'Inactive: 1498720 kB' 'Active(anon): 96276 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2719792 kB' 'Inactive(file): 1482036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 94932 kB' 'Mapped: 25300 kB' 'Shmem: 16892 kB' 'Slab: 237068 kB' 'SReclaimable: 175240 kB' 'SUnreclaim: 61828 kB' 'KernelStack: 6032 kB' 'PageTables: 8740 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626284 kB' 'Committed_AS: 343528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359683580 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 104300 kB' 'DirectMap2M: 5138432 kB' 'DirectMap1G: 9437184 kB'
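[note: the HUGENODE='nodes_hp[0]=512' string handed to "setup output" above is just the nodes_hp array joined under the IFS=',' set at hugepages.sh@167; a minimal sketch of that assembly, assuming the single-node layout this run reports:]
  nodes_hp=([0]=512)                    # one NUMA node carrying all 512 pages
  HUGENODE=()
  for node in "${!nodes_hp[@]}"; do
      HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
  done
  (IFS=,; echo "${HUGENODE[*]}")        # -> nodes_hp[0]=512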
[xtrace condensed: setup/common.sh@31-32 continues past every key from MemTotal through HardwareCorrupted while looking for AnonHugePages]
00:04:43.600 07:50:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:43.600 07:50:49 -- setup/common.sh@33 -- # echo 8192
00:04:43.600 07:50:49 -- setup/common.sh@33 -- # return 0
00:04:43.600 07:50:49 -- setup/hugepages.sh@97 -- # anon=8192
00:04:43.600 07:50:49 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:43.600 07:50:49 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:43.600 07:50:49 -- setup/common.sh@18 -- # local node=
00:04:43.600 07:50:49 -- setup/common.sh@19 -- # local var val
00:04:43.600 07:50:49 -- setup/common.sh@20 -- # local mem_f mem
00:04:43.600 07:50:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.600 07:50:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.600 07:50:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.600 07:50:49 -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.600 07:50:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.600 07:50:49 -- setup/common.sh@31 -- # IFS=': '
00:04:43.600 07:50:49 -- setup/common.sh@31 -- # read -r var val _
00:04:43.600 07:50:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6479072 kB' 'MemAvailable: 10534052 kB' 'Buffers: 2068 kB' 'Cached: 4216652 kB' 'SwapCached: 0 kB' 'Active: 2816068 kB' 'Inactive: 1498720 kB' 'Active(anon): 96276 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2719792 kB' 'Inactive(file): 1482036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 94932 kB' 'Mapped: 25300 kB' 'Shmem: 16892 kB' 'Slab: 237068 kB' 'SReclaimable: 175240 kB' 'SUnreclaim: 61828 kB' 'KernelStack: 6032 kB' 'PageTables: 8352 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626284 kB' 'Committed_AS: 343528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359683580 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 104300 kB' 'DirectMap2M: 5138432 kB' 'DirectMap1G: 9437184 kB'
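[note: the anon=8192 above is only collected because the guard at hugepages.sh@96 saw '[always] madvise never', i.e. transparent hugepages are not disabled; the same test restated standalone — the sysfs path and sample value are from this trace, the message is mine:]
  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "[always] madvise never"
  [[ $thp != *"[never]"* ]] && echo "THP enabled; AnonHugePages is meaningful"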
[xtrace condensed: setup/common.sh@31-32 continues past every key from MemTotal through HugePages_Rsvd while looking for HugePages_Surp]
00:04:43.601 07:50:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:43.601 07:50:49 -- setup/common.sh@33 -- # echo 0
00:04:43.601 07:50:49 -- setup/common.sh@33 -- # return 0
00:04:43.601 07:50:49 -- setup/hugepages.sh@99 -- # surp=0
00:04:43.601 07:50:49 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:43.601 07:50:49 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:43.601 07:50:49 -- setup/common.sh@18 -- # local node=
00:04:43.601 07:50:49 -- setup/common.sh@19 -- # local var val
00:04:43.601 07:50:49 -- setup/common.sh@20 -- # local mem_f mem
00:04:43.601 07:50:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.601 07:50:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.601 07:50:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.601 07:50:49 -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.601 07:50:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.601 07:50:49 -- setup/common.sh@31 -- # IFS=': '
00:04:43.601 07:50:49 -- setup/common.sh@31 -- # read -r var val _
00:04:43.601 07:50:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6479332 kB' 'MemAvailable: 10534312 kB' 'Buffers: 2068 kB' 'Cached: 4216652 kB' 'SwapCached: 0 kB' 'Active: 2816068 kB' 'Inactive: 1498720 kB' 'Active(anon): 96276 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2719792 kB' 'Inactive(file): 1482036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 94932 kB' 'Mapped: 25300 kB' 'Shmem: 16892 kB' 'Slab: 237068 kB' 'SReclaimable: 175240 kB' 'SUnreclaim: 61828 kB' 'KernelStack: 6032 kB' 'PageTables: 8352 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626284 kB' 'Committed_AS: 343528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359683580 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 104300 kB' 'DirectMap2M: 5138432 kB' 'DirectMap1G: 9437184 kB'
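[note: in the global fetches above node is empty, so the probed sysfs path degenerates to /sys/devices/system/node/node/meminfo — the doubled "node" in the log is not a typo — the -e test fails and the function keeps /proc/meminfo; illustrated under that assumption:]
  node=
  mem_f=/proc/meminfo
  [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
      mem_f=/sys/devices/system/node/node$node/meminfo
  echo "$mem_f"   # -> /proc/meminfo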
[xtrace condensed: setup/common.sh@31-32 continues past every key from MemTotal through HugePages_Free while looking for HugePages_Rsvd]
00:04:43.602 07:50:49 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:43.602 07:50:49 -- setup/common.sh@33 -- # echo 0
00:04:43.602 07:50:49 -- setup/common.sh@33 -- # return 0
nr_hugepages=512
resv_hugepages=0
surplus_hugepages=0
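[note: with anon, surp and resv in hand, the check at hugepages.sh@107 just below is plain integer arithmetic; restated with the traced values:]
  nr_hugepages=512 surp=0 resv=0
  # the configured total must equal allocated + surplus + reserved pages
  (( 512 == nr_hugepages + surp + resv )) && echo verified   # -> verified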
00:04:43.602 07:50:49 -- setup/hugepages.sh@100 -- # resv=0
00:04:43.602 07:50:49 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:43.602 07:50:49 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:43.602 07:50:49 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:43.602 anon_hugepages=8192
00:04:43.602 07:50:49 -- setup/hugepages.sh@105 -- # echo anon_hugepages=8192
00:04:43.602 07:50:49 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:43.602 07:50:49 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:04:43.602 07:50:49 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:43.602 07:50:49 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:43.603 07:50:49 -- setup/common.sh@18 -- # local node=
00:04:43.603 07:50:49 -- setup/common.sh@19 -- # local var val
00:04:43.603 07:50:49 -- setup/common.sh@20 -- # local mem_f mem
00:04:43.603 07:50:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.603 07:50:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.603 07:50:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.603 07:50:49 -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.603 07:50:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.603 07:50:49 -- setup/common.sh@31 -- # IFS=': '
00:04:43.603 07:50:49 -- setup/common.sh@31 -- # read -r var val _
00:04:43.603 07:50:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6479592 kB' 'MemAvailable: 10534572 kB' 'Buffers: 2068 kB' 'Cached: 4216652 kB' 'SwapCached: 0 kB' 'Active: 2815808 kB' 'Inactive: 1498720 kB' 'Active(anon): 96016 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2719792 kB' 'Inactive(file): 1482036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 94932 kB' 'Mapped: 25300 kB' 'Shmem: 16892 kB' 'Slab: 237068 kB' 'SReclaimable: 175240 kB' 'SUnreclaim: 61828 kB' 'KernelStack: 6032 kB' 'PageTables: 8352 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626284 kB' 'Committed_AS: 343528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359683580 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 104300 kB' 'DirectMap2M: 5138432 kB' 'DirectMap1G: 9437184 kB'
00:04:43.603 07:50:49 -- setup/common.sh@32 -- # (scan: MemTotal through CmaFree checked against HugePages_Total, no match, continue)
00:04:43.604 07:50:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:43.604 07:50:49 -- setup/common.sh@33 -- # echo 512
00:04:43.604 07:50:49 -- setup/common.sh@33 -- # return 0
00:04:43.604 07:50:49 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:43.604 07:50:49 -- setup/hugepages.sh@112 -- # get_nodes
00:04:43.604 07:50:49 -- setup/hugepages.sh@27 -- # local node
00:04:43.604 07:50:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:43.604 07:50:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
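Note: get_meminfo also takes an optional node number; the @22-@24 steps above pick the input file, and @29 strips the per-node prefix. A sketch of that path selection, assuming the sysfs layout the trace probes:

  # sketch: choose global or per-node meminfo, as in the traced @22-@29 steps
  shopt -s extglob                         # needed for the +([0-9]) pattern below
  node=0
  mem_f=/proc/meminfo
  if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")         # per-node lines start with 'Node 0 '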
00:04:43.604 07:50:49 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:43.604 07:50:49 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:43.604 07:50:49 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:43.604 07:50:49 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:43.604 07:50:49 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:43.604 07:50:49 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:43.604 07:50:49 -- setup/common.sh@18 -- # local node=0
00:04:43.604 07:50:49 -- setup/common.sh@19 -- # local var val
00:04:43.604 07:50:49 -- setup/common.sh@20 -- # local mem_f mem
00:04:43.604 07:50:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.604 07:50:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:43.604 07:50:49 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:43.604 07:50:49 -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.604 07:50:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.604 07:50:49 -- setup/common.sh@31 -- # IFS=': '
00:04:43.604 07:50:49 -- setup/common.sh@31 -- # read -r var val _
00:04:43.604 07:50:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6479516 kB' 'MemUsed: 5821632 kB' 'Active: 2815808 kB' 'Inactive: 1498720 kB' 'Active(anon): 96016 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2719792 kB' 'Inactive(file): 1482036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'FilePages: 4218720 kB' 'Mapped: 25300 kB' 'AnonPages: 94932 kB' 'Shmem: 16892 kB' 'KernelStack: 6032 kB' 'PageTables: 8740 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 237068 kB' 'SReclaimable: 175240 kB' 'SUnreclaim: 61828 kB' 'AnonHugePages: 8192 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:43.604 07:50:49 -- setup/common.sh@32 -- # (scan: MemTotal through HugePages_Free checked against HugePages_Surp, no match, continue)
00:04:43.605 07:50:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:43.605 07:50:49 -- setup/common.sh@33 -- # echo 0
00:04:43.605 07:50:49 -- setup/common.sh@33 -- # return 0
00:04:43.605 node0=512 expecting 512
00:04:43.605 ************************************
00:04:43.605 END TEST custom_alloc
00:04:43.605 ************************************
00:04:43.605 07:50:49 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:43.605 07:50:49 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:43.605 07:50:49 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:43.605 07:50:49 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:43.605 07:50:49 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:43.605 07:50:49 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:43.605 real 0m0.312s
00:04:43.605 user 0m0.158s
00:04:43.605 sys 0m0.185s
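Note: custom_alloc passed because the kernel's counters reconciled with the request: 512 pages total, 0 surplus, 0 reserved, and all 512 expected on node0. A rough sketch of the reconciliation the @107-@130 steps performed, using the get_meminfo helpers sketched earlier (the variable names mirror the script; values are this run's):

  # sketch: the consistency checks behind 'node0=512 expecting 512'
  nr_hugepages=512 surp=0 resv=0
  total=$(get_meminfo HugePages_Total)            # 512 above
  (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2
  node_pages=$(( nr_hugepages + resv + $(get_meminfo HugePages_Surp 0) ))
  echo "node0=$node_pages expecting $nr_hugepages"  # node0=512 expecting 512
  [[ $node_pages == "$nr_hugepages" ]]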
00:04:43.605 07:50:49 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:43.605 07:50:49 -- common/autotest_common.sh@10 -- # set +x
00:04:43.864 07:50:49 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:43.864 07:50:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:43.864 07:50:49 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:43.864 07:50:49 -- common/autotest_common.sh@10 -- # set +x
00:04:43.864 ************************************
00:04:43.864 START TEST no_shrink_alloc
00:04:43.864 ************************************
00:04:43.864 07:50:49 -- common/autotest_common.sh@1104 -- # no_shrink_alloc
00:04:43.864 07:50:49 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:43.864 07:50:49 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:43.864 07:50:49 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:43.864 07:50:49 -- setup/hugepages.sh@51 -- # shift
00:04:43.864 07:50:49 -- setup/hugepages.sh@52 -- # node_ids=("$@")
00:04:43.864 07:50:49 -- setup/hugepages.sh@52 -- # local node_ids
00:04:43.864 07:50:49 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:43.864 07:50:49 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:43.864 07:50:49 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:43.864 07:50:49 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:04:43.864 07:50:49 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:43.864 07:50:49 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:43.864 07:50:49 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:43.864 07:50:49 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:43.864 07:50:49 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:43.864 07:50:49 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:43.864 07:50:49 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:43.864 07:50:49 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:43.864 07:50:49 -- setup/hugepages.sh@73 -- # return 0
00:04:43.864 07:50:49 -- setup/hugepages.sh@198 -- # setup output
00:04:43.864 07:50:49 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:43.864 07:50:49 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:43.865 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev
00:04:43.865 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:43.865 07:50:49 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:43.865 07:50:49 -- setup/hugepages.sh@89 -- # local node
00:04:43.865 07:50:49 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:43.865 07:50:49 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:43.865 07:50:49 -- setup/hugepages.sh@92 -- # local surp
00:04:43.865 07:50:49 -- setup/hugepages.sh@93 -- # local resv
00:04:43.865 07:50:49 -- setup/hugepages.sh@94 -- # local anon
00:04:43.865 07:50:49 -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]]
00:04:43.865 07:50:49 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
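Note: the @49-@73 steps above turn the requested size into a page count and seed the per-node expectation table. A sketch of that arithmetic, assuming (as the 1024 result implies, given 'Hugepagesize: 2048 kB') that both the size argument and default_hugepages are in kB:

  # sketch: size -> hugepage count, as in get_test_nr_hugepages above
  size=2097152                    # kB requested (2 GiB)
  default_hugepages=2048          # kB, matching 'Hugepagesize: 2048 kB'
  (( size >= default_hugepages )) && nr_hugepages=$(( size / default_hugepages ))
  echo "$nr_hugepages"            # 1024
  nodes_test=()                   # per-node expectations, as @67-@71 build them
  nodes_test[0]=$nr_hugepages     # node 0 expects all 1024 pages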
00:04:43.865 07:50:49 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:43.865 07:50:49 -- setup/common.sh@18 -- # local node=
00:04:43.865 07:50:49 -- setup/common.sh@19 -- # local var val
00:04:43.865 07:50:49 -- setup/common.sh@20 -- # local mem_f mem
00:04:43.865 07:50:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.865 07:50:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.865 07:50:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.865 07:50:49 -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.865 07:50:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.865 07:50:49 -- setup/common.sh@31 -- # IFS=': '
00:04:43.865 07:50:49 -- setup/common.sh@31 -- # read -r var val _
00:04:43.865 07:50:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 5431204 kB' 'MemAvailable: 9486184 kB' 'Buffers: 2068 kB' 'Cached: 4216652 kB' 'SwapCached: 0 kB' 'Active: 2815300 kB' 'Inactive: 1498720 kB' 'Active(anon): 95508 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2719792 kB' 'Inactive(file): 1482036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 94892 kB' 'Mapped: 25412 kB' 'Shmem: 16892 kB' 'Slab: 236884 kB' 'SReclaimable: 175240 kB' 'SUnreclaim: 61644 kB' 'KernelStack: 6000 kB' 'PageTables: 7712 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101996 kB' 'Committed_AS: 343528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359683580 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 104300 kB' 'DirectMap2M: 5138432 kB' 'DirectMap1G: 9437184 kB'
00:04:44.127 07:50:49 -- setup/common.sh@32 -- # (scan: MemTotal through HardwareCorrupted checked against AnonHugePages, no match, continue)
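Note: the AnonHugePages probe above is only reached because verify_nr_hugepages first checked, at @96, that transparent hugepages are not pinned to [never] ('[always] madvise never' in this run). A sketch of that gate, assuming the usual sysfs location for the THP mode string:

  # sketch: skip anon-hugepage accounting when THP mode is [never]
  thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. '[always] madvise never'
  if [[ $thp != *"[never]"* ]]; then
      anon=$(get_meminfo AnonHugePages)   # 8192 kB in this run
  else
      anon=0
  fi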
00:04:44.127 07:50:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:44.127 07:50:49 -- setup/common.sh@33 -- # echo 8192
00:04:44.127 07:50:49 -- setup/common.sh@33 -- # return 0
00:04:44.127 07:50:49 -- setup/hugepages.sh@97 -- # anon=8192
00:04:44.127 07:50:49 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:44.127 07:50:49 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:44.127 07:50:49 -- setup/common.sh@18 -- # local node=
00:04:44.127 07:50:49 -- setup/common.sh@19 -- # local var val
00:04:44.127 07:50:49 -- setup/common.sh@20 -- # local mem_f mem
00:04:44.127 07:50:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.127 07:50:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.127 07:50:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.127 07:50:49 -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.127 07:50:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.127 07:50:49 -- setup/common.sh@31 -- # IFS=': '
00:04:44.127 07:50:49 -- setup/common.sh@31 -- # read -r var val _
00:04:44.128 07:50:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 5431468 kB' 'MemAvailable: 9486448 kB' 'Buffers: 2068 kB' 'Cached: 4216652 kB' 'SwapCached: 0 kB' 'Active: 2815300 kB' 'Inactive: 1498720 kB' 'Active(anon): 95508 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2719792 kB' 'Inactive(file): 1482036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 94892 kB' 'Mapped: 25412 kB' 'Shmem: 16892 kB' 'Slab: 236884 kB' 'SReclaimable: 175240 kB' 'SUnreclaim: 61644 kB' 'KernelStack: 6000 kB' 'PageTables: 7712 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101996 kB' 'Committed_AS: 343528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359683580 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 104300 kB' 'DirectMap2M: 5138432 kB' 'DirectMap1G: 9437184 kB'
00:04:44.128 07:50:49 -- setup/common.sh@32 -- # (scan: MemTotal through HugePages_Rsvd checked against HugePages_Surp, no match, continue)
00:04:44.129 07:50:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:44.129 07:50:49 -- setup/common.sh@33 -- # echo 0
00:04:44.129 07:50:49 -- setup/common.sh@33 -- # return 0
00:04:44.129 07:50:49 -- setup/hugepages.sh@99 -- # surp=0
00:04:44.129 07:50:49 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:44.129 07:50:49 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:44.129 07:50:49 -- setup/common.sh@18 -- # local node=
00:04:44.129 07:50:49 -- setup/common.sh@19 -- # local var val
00:04:44.129 07:50:49 -- setup/common.sh@20 -- # local mem_f mem
00:04:44.129 07:50:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.129 07:50:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.129 07:50:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.129 07:50:49 -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.129 07:50:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.129 07:50:49 -- setup/common.sh@31 -- # IFS=': '
00:04:44.129 07:50:49 -- setup/common.sh@31 -- # read -r var val _
00:04:44.129 07:50:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 5431468 kB' 'MemAvailable: 9486448 kB' 'Buffers: 2068 kB' 'Cached: 4216652 kB' 'SwapCached: 0 kB' 'Active: 2815560 kB' 'Inactive: 1498720 kB' 'Active(anon): 95768 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2719792 kB' 'Inactive(file): 1482036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 94504 kB' 'Mapped: 25412 kB' 'Shmem: 16892 kB' 'Slab: 236884 kB' 'SReclaimable: 175240 kB' 'SUnreclaim: 61644 kB' 'KernelStack: 6000 kB' 'PageTables: 7712 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101996 kB' 'Committed_AS: 343528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359683580 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 104300 kB' 'DirectMap2M: 5138432 kB' 'DirectMap1G: 9437184 kB'
00:04:44.130 07:50:49 -- setup/common.sh@32 -- # (scan: MemTotal through HugePages_Free checked against HugePages_Rsvd, no match, continue)
continue 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.130 07:50:49 -- setup/common.sh@33 -- # echo 0 00:04:44.130 07:50:49 -- setup/common.sh@33 -- # return 0 00:04:44.130 nr_hugepages=1024 00:04:44.130 resv_hugepages=0 00:04:44.130 surplus_hugepages=0 00:04:44.130 anon_hugepages=8192 00:04:44.130 07:50:49 -- setup/hugepages.sh@100 -- # resv=0 00:04:44.130 07:50:49 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:44.130 07:50:49 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:44.130 07:50:49 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:44.130 07:50:49 -- setup/hugepages.sh@105 -- # echo anon_hugepages=8192 00:04:44.130 07:50:49 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:44.130 07:50:49 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:44.130 07:50:49 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:44.130 07:50:49 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:44.130 07:50:49 -- setup/common.sh@18 -- # local node= 00:04:44.130 07:50:49 -- setup/common.sh@19 -- # local var val 00:04:44.130 07:50:49 -- setup/common.sh@20 -- # local mem_f mem 00:04:44.130 07:50:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.130 07:50:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.130 07:50:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.130 07:50:49 -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.130 07:50:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.130 07:50:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 5431732 kB' 'MemAvailable: 9486712 kB' 'Buffers: 2068 kB' 'Cached: 4216652 kB' 'SwapCached: 0 kB' 'Active: 2815560 kB' 'Inactive: 1498720 kB' 'Active(anon): 95768 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2719792 kB' 'Inactive(file): 1482036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 94504 kB' 'Mapped: 25412 kB' 'Shmem: 16892 kB' 'Slab: 236884 kB' 'SReclaimable: 175240 kB' 'SUnreclaim: 61644 kB' 'KernelStack: 6000 kB' 'PageTables: 7712 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101996 kB' 'Committed_AS: 343528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359683580 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 104300 kB' 'DirectMap2M: 5138432 kB' 'DirectMap1G: 9437184 kB' 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.130 07:50:49 -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.130 07:50:49 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.130 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.130 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.131 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.131 07:50:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.131 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.131 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.131 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.131 07:50:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.131 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.131 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.131 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.131 07:50:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.131 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.131 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.131 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.131 07:50:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.131 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.131 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.131 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.131 07:50:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.131 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.131 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.131 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.131 07:50:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.131 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.131 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.131 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.131 07:50:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.131 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.131 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.131 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.131 07:50:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.131 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.131 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.131 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.131 07:50:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.131 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.131 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.131 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.131 07:50:49 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.131 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.131 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.131 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.131 07:50:49 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.131 07:50:49 -- setup/common.sh@32 -- # continue 00:04:44.131 07:50:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.131 07:50:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.131 07:50:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.131 07:50:49 -- setup/common.sh@33 -- # echo 1024 00:04:44.131 07:50:49 -- setup/common.sh@33 -- # 
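The lookups traced above all go through get_meminfo in setup/common.sh: snapshot the meminfo source with mapfile, strip any "Node <n>" prefix, then split each record on ': ' and continue past every key until the requested one matches, echoing its value. A minimal sketch of that parsing pattern, assuming the system-wide /proc/meminfo format (the helper name get_meminfo_sketch is mine, not the script's):

  #!/usr/bin/env bash
  # Sketch of the lookup loop traced at setup/common.sh@31-33.
  # Assumes records like "HugePages_Total:    1024" or
  # "MemFree:  5431732 kB"; the real script additionally handles the
  # per-node sysfs files, whose records carry a "Node <n> " prefix.
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # skip non-matching keys, as in the trace
          echo "$val"                        # bare number; any "kB" unit lands in $_
          return 0
      done < /proc/meminfo
      return 1
  }
  get_meminfo_sketch HugePages_Total   # would print 1024 on the host traced here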
00:04:44.131 07:50:49 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:44.131 07:50:49 -- setup/hugepages.sh@112 -- # get_nodes
00:04:44.131 07:50:49 -- setup/hugepages.sh@27 -- # local node
00:04:44.131 07:50:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:44.131 07:50:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:44.131 07:50:49 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:44.131 07:50:49 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:44.131 07:50:49 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:44.131 07:50:49 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:44.131 07:50:49 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:44.131 07:50:49 -- setup/common.sh@17-24 -- # local get=HugePages_Surp node=0 var val; [[ -e /sys/devices/system/node/node0/meminfo ]]; mem_f=/sys/devices/system/node/node0/meminfo
00:04:44.131 07:50:49 -- setup/common.sh@28-29 -- # mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")
00:04:44.131 07:50:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 5431992 kB' 'MemUsed: 6869156 kB' 'Active: 2815300 kB' 'Inactive: 1498720 kB' 'Active(anon): 95508 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2719792 kB' 'Inactive(file): 1482036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'FilePages: 4218720 kB' 'Mapped: 25412 kB' 'AnonPages: 94892 kB' 'Shmem: 16892 kB' 'KernelStack: 6000 kB' 'PageTables: 7712 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 236884 kB' 'SReclaimable: 175240 kB' 'SUnreclaim: 61644 kB' 'AnonHugePages: 8192 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:44.131 07:50:49 -- setup/common.sh@31-32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] … [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] -- # continue (non-matching keys skipped)
00:04:44.132 07:50:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:44.132 07:50:49 -- setup/common.sh@33 -- # echo 0
00:04:44.132 07:50:49 -- setup/common.sh@33 -- # return 0
00:04:44.132 07:50:49 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:44.132 07:50:49 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:44.132 07:50:49 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
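get_nodes (hugepages.sh@27-33) discovers one NUMA node and expects all 1024 pages on it; the @115-127 loop then reads the node's own sysfs meminfo to confirm nothing leaked into surplus. A hedged sketch of that per-node bookkeeping; the variable names mirror the trace, the node*/meminfo path is standard sysfs, and netting surplus out of the node total is my simplification of the script's resv/surp folding:

  #!/usr/bin/env bash
  # Sketch of the per-node accounting traced at hugepages.sh@112-128.
  shopt -s nullglob
  nodes_sys=() nodes_test=()
  for node in /sys/devices/system/node/node[0-9]*; do
      n=${node##*node}
      nodes_sys[n]=1024   # expected per-node count, taken from the trace
  done
  echo "no_nodes=${#nodes_sys[@]}"
  for n in "${!nodes_sys[@]}"; do
      meminfo=/sys/devices/system/node/node$n/meminfo
      # per-node records look like "Node 0 HugePages_Total: 1024"
      total=$(awk '/HugePages_Total/ {print $NF}' "$meminfo")
      surp=$(awk '/HugePages_Surp/ {print $NF}' "$meminfo")
      (( nodes_test[n] = total - surp ))   # persistent pages actually on the node
      echo "node$n=${nodes_test[n]} expecting ${nodes_sys[n]}"
  done

On the host traced here this would print no_nodes=1 and node0=1024 expecting 1024, matching the log line below.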
00:04:44.132 07:50:49 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:44.132 07:50:49 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:44.132 node0=1024 expecting 1024
00:04:44.132 07:50:49 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:44.132 07:50:49 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:44.132 07:50:49 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:44.132 07:50:49 -- setup/hugepages.sh@202 -- # setup output
00:04:44.132 07:50:49 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:44.132 07:50:49 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:44.393 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev
00:04:44.393 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:44.393 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:44.393 07:50:49 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:44.393 07:50:49 -- setup/hugepages.sh@89-94 -- # local node sorted_t sorted_s surp resv anon
00:04:44.393 07:50:49 -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]]
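The @96 test gates the anonymous-hugepage sample on transparent hugepages being enabled: /sys/kernel/mm/transparent_hugepage/enabled prints every mode with the active one bracketed ("[always] madvise never" on this host), so the glob only fails when "[never]" is the active mode. A small sketch of the same gate:

  #!/usr/bin/env bash
  # Sketch of the THP gate traced at hugepages.sh@96-97.
  thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
  if [[ $thp != *"[never]"* ]]; then
      # THP is on, so anonymous hugepages must be accounted for separately
      anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
  else
      anon=0
  fi
  echo "anon_hugepages=$anon"   # 8192 (kB) in the run below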
00:04:44.393 07:50:49 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:44.394 07:50:49 -- setup/common.sh@17-29 -- # local get=AnonHugePages node= var val mem_f=/proc/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")
00:04:44.394 07:50:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 5431484 kB' 'MemAvailable: 9486464 kB' 'Buffers: 2068 kB' 'Cached: 4216652 kB' 'SwapCached: 0 kB' 'Active: 2815496 kB' 'Inactive: 1498720 kB' 'Active(anon): 95704 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2719792 kB' 'Inactive(file): 1482036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 94212 kB' 'Mapped: 25412 kB' 'Shmem: 16892 kB' 'Slab: 236884 kB' 'SReclaimable: 175240 kB' 'SUnreclaim: 61644 kB' 'KernelStack: 6000 kB' 'PageTables: 8100 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101996 kB' 'Committed_AS: 343528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359683580 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 104300 kB' 'DirectMap2M: 5138432 kB' 'DirectMap1G: 9437184 kB'
00:04:44.394 07:50:49 -- setup/common.sh@31-32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] … [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] -- # continue (non-matching keys skipped)
00:04:44.395 07:50:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:44.395 07:50:49 -- setup/common.sh@33 -- # echo 8192
00:04:44.395 07:50:49 -- setup/common.sh@33 -- # return 0
00:04:44.395 07:50:49 -- setup/hugepages.sh@97 -- # anon=8192
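For scale, the snapshot reports Hugepagesize: 2048 kB, so the 8192 kB of AnonHugePages sampled here is only four transparent pages, tracked apart from the 1024 persistent pages under test:

  # AnonHugePages is reported in kB; persistent pages are page counts:
  echo $(( 8192 / 2048 ))   # -> 4 THP pages alongside HugePages_Total: 1024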
00:04:44.395 07:50:49 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:44.395 07:50:49 -- setup/common.sh@17-29 -- # local get=HugePages_Surp node= var val mem_f=/proc/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")
00:04:44.395 07:50:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 5432004 kB' 'MemAvailable: 9486984 kB' 'Buffers: 2068 kB' 'Cached: 4216652 kB' 'SwapCached: 0 kB' 'Active: 2815496 kB' 'Inactive: 1498720 kB' 'Active(anon): 95704 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2719792 kB' 'Inactive(file): 1482036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 94212 kB' 'Mapped: 25412 kB' 'Shmem: 16892 kB' 'Slab: 236884 kB' 'SReclaimable: 175240 kB' 'SUnreclaim: 61644 kB' 'KernelStack: 6000 kB' 'PageTables: 8100 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101996 kB' 'Committed_AS: 343528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359683580 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 104300 kB' 'DirectMap2M: 5138432 kB' 'DirectMap1G: 9437184 kB'
00:04:44.395 07:50:49 -- setup/common.sh@31-32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] … [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] -- # continue (non-matching keys skipped)
00:04:44.396 07:50:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:44.396 07:50:50 -- setup/common.sh@33 -- # echo 0
00:04:44.396 07:50:50 -- setup/common.sh@33 -- # return 0
00:04:44.396 07:50:50 -- setup/hugepages.sh@99 -- # surp=0
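With anon and surp re-sampled, verify_nr_hugepages fetches HugePages_Rsvd next and re-checks the @107-style identity traced earlier: the kernel's HugePages_Total must equal the configured count plus surplus plus reserved pages (1024 == 1024 + 0 + 0 in this run). A sketch of that consistency check, with nr_hugepages hard-coded from the trace rather than derived from the NRHUGE environment:

  #!/usr/bin/env bash
  # Sketch of the accounting identity traced at hugepages.sh@107/@110.
  nr_hugepages=1024
  get() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }
  total=$(get HugePages_Total)
  surp=$(get HugePages_Surp)
  resv=$(get HugePages_Rsvd)
  if (( total == nr_hugepages + surp + resv )); then
      echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
  else
      echo "hugepage accounting mismatch: total=$total surp=$surp resv=$resv" >&2
      exit 1
  fi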
mem=("${mem[@]#Node +([0-9]) }") 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.396 07:50:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 5432184 kB' 'MemAvailable: 9487164 kB' 'Buffers: 2068 kB' 'Cached: 4216652 kB' 'SwapCached: 0 kB' 'Active: 2815496 kB' 'Inactive: 1498720 kB' 'Active(anon): 95704 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2719792 kB' 'Inactive(file): 1482036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 94600 kB' 'Mapped: 25412 kB' 'Shmem: 16892 kB' 'Slab: 236884 kB' 'SReclaimable: 175240 kB' 'SUnreclaim: 61644 kB' 'KernelStack: 6000 kB' 'PageTables: 8100 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101996 kB' 'Committed_AS: 343528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359683580 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 104300 kB' 'DirectMap2M: 5138432 kB' 'DirectMap1G: 9437184 kB' 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # [[ 
Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.396 07:50:50 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.396 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.396 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.397 07:50:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.397 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.397 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.397 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.397 07:50:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.397 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.397 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.397 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.397 07:50:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.397 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.397 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.397 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.397 07:50:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.397 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.397 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.397 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.397 07:50:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.397 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.397 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.397 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.397 07:50:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.397 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.397 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.397 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.397 07:50:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.397 07:50:50 -- 
setup/common.sh@32 -- # continue 00:04:44.397 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.397 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.397 07:50:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.397 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.397 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.397 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.397 07:50:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.397 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.397 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.397 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.397 07:50:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.397 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.397 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.397 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.397 07:50:50 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.397 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.397 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.397 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.397 07:50:50 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.397 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.397 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.397 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.397 07:50:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.397 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.397 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.397 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.397 07:50:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.397 07:50:50 -- setup/common.sh@32 -- # continue 00:04:44.397 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.397 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.397 07:50:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.397 07:50:50 -- setup/common.sh@33 -- # echo 0 00:04:44.397 07:50:50 -- setup/common.sh@33 -- # return 0 00:04:44.397 nr_hugepages=1024 00:04:44.397 resv_hugepages=0 00:04:44.397 surplus_hugepages=0 00:04:44.397 anon_hugepages=8192 00:04:44.397 07:50:50 -- setup/hugepages.sh@100 -- # resv=0 00:04:44.397 07:50:50 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:44.397 07:50:50 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:44.397 07:50:50 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:44.397 07:50:50 -- setup/hugepages.sh@105 -- # echo anon_hugepages=8192 00:04:44.397 07:50:50 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:44.397 07:50:50 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:44.397 07:50:50 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:44.397 07:50:50 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:44.397 07:50:50 -- setup/common.sh@18 -- # local node= 00:04:44.397 07:50:50 -- setup/common.sh@19 -- # local var val 00:04:44.397 07:50:50 -- setup/common.sh@20 -- # local mem_f mem 00:04:44.397 07:50:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.397 07:50:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.397 07:50:50 -- 
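[editor's note] The long runs of [[ key == \H\u\g\e\P\a\g\e\s... ]] records condensed above all come from one small parse loop. A self-contained sketch of that pattern, reconstructed from the trace rather than copied from setup/common.sh, with the per-node prefix handling simplified:

    #!/usr/bin/env bash
    # Pull one value out of /proc/meminfo, or out of
    # /sys/devices/system/node/node<N>/meminfo when a node index is given.
    get_meminfo() {
        local get=$1 node=$2 line var val
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while read -r line; do
            line=${line#"Node $node "}   # per-node files prefix each line with "Node N "
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }

    get_meminfo HugePages_Total     # prints 1024 on this host
    get_meminfo HugePages_Surp 0    # node 0 surplus pages, prints 0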
00:04:44.397 07:50:50 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:44.397 07:50:50 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:44.397 07:50:50 -- setup/common.sh@18 -- # local node= 00:04:44.397 07:50:50 -- setup/common.sh@19 -- # local var val 00:04:44.397 07:50:50 -- setup/common.sh@20 -- # local mem_f mem 00:04:44.397 07:50:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.397 07:50:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.397 07:50:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.397 07:50:50 -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.397 07:50:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.397 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.397 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.397 07:50:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 5432444 kB' 'MemAvailable: 9487424 kB' 'Buffers: 2068 kB' 'Cached: 4216652 kB' 'SwapCached: 0 kB' 'Active: 2815432 kB' 'Inactive: 1498720 kB' 'Active(anon): 95640 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2719792 kB' 'Inactive(file): 1482036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 94504 kB' 'Mapped: 25412 kB' 'Shmem: 16892 kB' 'Slab: 236884 kB' 'SReclaimable: 175240 kB' 'SUnreclaim: 61644 kB' 'KernelStack: 6000 kB' 'PageTables: 8100 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101996 kB' 'Committed_AS: 343528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359683580 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 8192 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 104300 kB' 'DirectMap2M: 5138432 kB' 'DirectMap1G: 9437184 kB' 00:04:44.397 [xtrace condensed: the read loop walks MemTotal through CmaFree without matching HugePages_Total] 00:04:44.398 07:50:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.398 07:50:50 -- setup/common.sh@33 -- # echo 1024 00:04:44.398 07:50:50 -- setup/common.sh@33 -- # return 0 00:04:44.398 07:50:50 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:44.398 07:50:50 -- setup/hugepages.sh@112 -- # get_nodes 00:04:44.398 07:50:50 -- setup/hugepages.sh@27 -- # local node 00:04:44.398 07:50:50 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:44.398 07:50:50 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:44.398 07:50:50 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:44.398 07:50:50 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
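[editor's note] The get_nodes step above amounts to enumerating NUMA node directories and recording each node's hugepage pool. A sketch under the same assumptions as the trace (one node, a 2 MiB page pool of 1024):

    # count NUMA nodes and record each node's hugepage pool size
    shopt -s extglob nullglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_sys[@]}"   # 1 on this VM, with nodes_sys[0]=1024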
00:04:44.398 07:50:50 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:44.398 07:50:50 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:44.398 07:50:50 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:44.398 07:50:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.398 07:50:50 -- setup/common.sh@18 -- # local node=0 00:04:44.398 07:50:50 -- setup/common.sh@19 -- # local var val 00:04:44.398 07:50:50 -- setup/common.sh@20 -- # local mem_f mem 00:04:44.398 07:50:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.398 07:50:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:44.398 07:50:50 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:44.398 07:50:50 -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.398 07:50:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.398 07:50:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.398 07:50:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.398 07:50:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 5432444 kB' 'MemUsed: 6868704 kB' 'Active: 2815236 kB' 'Inactive: 1498720 kB' 'Active(anon): 95444 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2719792 kB' 'Inactive(file): 1482036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'FilePages: 4218720 kB' 'Mapped: 25412 kB' 'AnonPages: 94504 kB' 'Shmem: 16892 kB' 'KernelStack: 6000 kB' 'PageTables: 8100 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 236884 kB' 'SReclaimable: 175240 kB' 'SUnreclaim: 61644 kB' 'AnonHugePages: 8192 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:44.398 [xtrace condensed: the read loop walks the node0 meminfo keys MemTotal through HugePages_Free without matching HugePages_Surp] 00:04:44.399 07:50:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.399 07:50:50 -- setup/common.sh@33 -- # echo 0 00:04:44.399 07:50:50 -- setup/common.sh@33 -- # return 0 00:04:44.399 node0=1024 expecting 1024 07:50:50 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:44.399 07:50:50 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:44.399 07:50:50 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:44.399 07:50:50 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:44.399 07:50:50 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:44.399 07:50:50 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:44.399 00:04:44.399 real 0m0.624s 00:04:44.399 user 0m0.307s 00:04:44.399 sys 0m0.385s 00:04:44.399 07:50:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.399 07:50:50 -- common/autotest_common.sh@10 -- # set +x 00:04:44.399 ************************************ 00:04:44.399 END TEST no_shrink_alloc 00:04:44.399 ************************************ 00:04:44.399 07:50:50 -- setup/hugepages.sh@217 -- # clear_hp 00:04:44.399 07:50:50 -- setup/hugepages.sh@37 -- # local node hp 00:04:44.399 07:50:50 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:44.399 07:50:50 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:44.399 07:50:50 -- setup/hugepages.sh@41 -- # echo 0 00:04:44.399 07:50:50 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:44.399 07:50:50 -- setup/hugepages.sh@41 -- # echo 0 00:04:44.399 07:50:50 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:44.399 07:50:50 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:44.399 00:04:44.399 real 0m2.765s 00:04:44.399 user 0m1.254s 00:04:44.399 sys 0m1.680s 00:04:44.399 07:50:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.399 07:50:50 -- common/autotest_common.sh@10 -- # set +x
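[editor's note] The clear_hp trace above returns every per-node hugepage pool to zero once the suite is done. Condensed to its effect (root required; the paths are the standard sysfs layout seen in the trace):

    # zero each per-node hugepage pool and flag the environment for later stages
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
        echo 0 > "$hp"
    done
    export CLEAR_HUGE=yes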
00:04:44.399 ************************************ 00:04:44.399 END TEST hugepages 00:04:44.399 ************************************ 00:04:44.399 07:50:50 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:44.399 07:50:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:44.399 07:50:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:44.399 07:50:50 -- common/autotest_common.sh@10 -- # set +x 00:04:44.399 ************************************ 00:04:44.399 START TEST driver 00:04:44.399 ************************************ 00:04:44.399 07:50:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:44.659 * Looking for test storage... 00:04:44.659 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:44.659 07:50:50 -- setup/driver.sh@68 -- # setup reset 00:04:44.659 07:50:50 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:44.659 07:50:50 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:44.918 07:50:50 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:44.918 07:50:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:44.918 07:50:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:44.918 07:50:50 -- common/autotest_common.sh@10 -- # set +x 00:04:44.918 ************************************ 00:04:44.918 START TEST guess_driver 00:04:44.918 ************************************ 00:04:44.918 07:50:50 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:44.918 07:50:50 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:44.918 07:50:50 -- setup/driver.sh@47 -- # local fail=0 00:04:44.918 07:50:50 -- setup/driver.sh@49 -- # pick_driver 00:04:44.918 07:50:50 -- setup/driver.sh@36 -- # vfio 00:04:44.918 07:50:50 -- setup/driver.sh@21 -- # local iommu_groups 00:04:44.918 07:50:50 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:44.918 07:50:50 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:44.918 07:50:50 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:44.919 07:50:50 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:44.919 07:50:50 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:44.919 07:50:50 -- setup/driver.sh@32 -- # return 1 00:04:44.919 07:50:50 -- setup/driver.sh@38 -- # uio 00:04:44.919 07:50:50 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:44.919 07:50:50 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:44.919 07:50:50 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:44.919 07:50:50 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:44.919 07:50:50 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/3.10.0-1160.114.2.el7.x86_64/kernel/drivers/uio/uio.ko.xz insmod /lib/modules/3.10.0-1160.114.2.el7.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:44.919 07:50:50 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:44.919 Looking for driver=uio_pci_generic 00:04:44.919 07:50:50 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:44.919 07:50:50 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:44.919 07:50:50 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:44.919 07:50:50 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.919 07:50:50 -- setup/driver.sh@45 -- # setup output config 00:04:44.919 07:50:50 --
setup/common.sh@9 -- # [[ output == output ]] 00:04:44.919 07:50:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:45.178 07:50:50 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:45.178 07:50:50 -- setup/driver.sh@58 -- # continue 00:04:45.178 07:50:50 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:45.438 07:50:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:45.438 07:50:51 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:45.438 07:50:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:45.438 07:50:51 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:45.438 07:50:51 -- setup/driver.sh@65 -- # setup reset 00:04:45.438 07:50:51 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:45.438 07:50:51 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:45.697 ************************************ 00:04:45.697 END TEST guess_driver 00:04:45.697 ************************************ 00:04:45.697 00:04:45.697 real 0m0.810s 00:04:45.697 user 0m0.290s 00:04:45.697 sys 0m0.509s 00:04:45.697 07:50:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.697 07:50:51 -- common/autotest_common.sh@10 -- # set +x 00:04:45.697 ************************************ 00:04:45.697 END TEST driver 00:04:45.697 ************************************ 00:04:45.697 00:04:45.697 real 0m1.313s 00:04:45.697 user 0m0.476s 00:04:45.697 sys 0m0.825s 00:04:45.697 07:50:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.697 07:50:51 -- common/autotest_common.sh@10 -- # set +x 00:04:45.957 07:50:51 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:45.957 07:50:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:45.957 07:50:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:45.957 07:50:51 -- common/autotest_common.sh@10 -- # set +x 00:04:45.957 ************************************ 00:04:45.957 START TEST devices 00:04:45.957 ************************************ 00:04:45.957 07:50:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:45.957 * Looking for test storage... 
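[editor's note] The guess_driver test above encodes a simple preference order: vfio wins only if the host exposes IOMMU groups or vfio's unsafe no-IOMMU switch, otherwise the harness falls back to uio_pci_generic, which is what this CentOS 7 VM got. A standalone sketch of that decision, reconstructed from the trace rather than copied from driver.sh:

    # prefer vfio-pci when an IOMMU is usable, else fall back to uio_pci_generic
    pick_driver() {
        shopt -s nullglob
        local groups=(/sys/kernel/iommu_groups/*)
        local unsafe_vfio
        unsafe_vfio=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode 2>/dev/null)
        if (( ${#groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
            echo vfio-pci
        elif modprobe --show-depends uio_pci_generic &> /dev/null; then
            echo uio_pci_generic
        else
            echo 'No valid driver found'
        fi
    }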
00:04:45.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:45.957 07:50:51 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:45.957 07:50:51 -- setup/devices.sh@192 -- # setup reset 00:04:45.957 07:50:51 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:45.957 07:50:51 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:46.216 07:50:51 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:46.216 07:50:51 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:46.216 07:50:51 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:46.216 07:50:51 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:46.216 07:50:51 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:46.216 07:50:51 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:46.216 07:50:51 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:46.216 07:50:51 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:46.216 07:50:51 -- common/autotest_common.sh@1649 -- # return 1 00:04:46.216 07:50:51 -- setup/devices.sh@196 -- # blocks=() 00:04:46.216 07:50:51 -- setup/devices.sh@196 -- # declare -a blocks 00:04:46.216 07:50:51 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:46.216 07:50:51 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:46.216 07:50:51 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:46.216 07:50:51 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:46.216 07:50:51 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:46.216 07:50:51 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:46.216 07:50:51 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:46.216 07:50:51 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:46.216 07:50:51 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:46.216 07:50:51 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:46.216 07:50:51 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:46.476 No valid GPT data, bailing 00:04:46.476 07:50:52 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:46.476 07:50:52 -- scripts/common.sh@393 -- # pt= 00:04:46.476 07:50:52 -- scripts/common.sh@394 -- # return 1 00:04:46.476 07:50:52 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:46.476 07:50:52 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:46.476 07:50:52 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:46.476 07:50:52 -- setup/common.sh@80 -- # echo 5368709120 00:04:46.476 07:50:52 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:46.476 07:50:52 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:46.476 07:50:52 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:46.476 07:50:52 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:46.476 07:50:52 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:46.476 07:50:52 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:46.476 07:50:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:46.476 07:50:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:46.476 07:50:52 -- common/autotest_common.sh@10 -- # set +x 00:04:46.476 ************************************ 00:04:46.476 START TEST nvme_mount 00:04:46.476 ************************************ 00:04:46.476 07:50:52 -- common/autotest_common.sh@1104 -- # nvme_mount 00:04:46.476 07:50:52 -- 
setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:46.476 07:50:52 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:46.476 07:50:52 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:46.476 07:50:52 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:46.476 07:50:52 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:46.476 07:50:52 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:46.476 07:50:52 -- setup/common.sh@40 -- # local part_no=1 00:04:46.476 07:50:52 -- setup/common.sh@41 -- # local size=1073741824 00:04:46.476 07:50:52 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:46.476 07:50:52 -- setup/common.sh@44 -- # parts=() 00:04:46.476 07:50:52 -- setup/common.sh@44 -- # local parts 00:04:46.476 07:50:52 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:46.476 07:50:52 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:46.476 07:50:52 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:46.476 07:50:52 -- setup/common.sh@46 -- # (( part++ )) 00:04:46.476 07:50:52 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:46.476 07:50:52 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:46.476 07:50:52 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:46.476 07:50:52 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:47.414 Creating new GPT entries. 00:04:47.414 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:47.414 other utilities. 00:04:47.414 07:50:53 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:47.414 07:50:53 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:47.414 07:50:53 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:47.414 07:50:53 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:47.414 07:50:53 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:48.375 Creating new GPT entries. 00:04:48.375 The operation has completed successfully. 
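[editor's note] The two "Creating new GPT entries." messages above come from the sgdisk pair in partition_drive: a --zap-all wipe followed by a --new for the small test partition. The same sequence in isolation, assuming the disk under test and root privileges (partprobe stands in here for the harness's sync_dev_uevents.sh wait):

    disk=/dev/nvme0n1
    sgdisk "$disk" --zap-all                          # destroys GPT/MBR data structures
    flock "$disk" sgdisk "$disk" --new=1:2048:264191  # first partition, sectors 2048-264191
    partprobe "$disk"                                 # re-read the partition table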
00:04:48.375 07:50:54 -- setup/common.sh@57 -- # (( part++ )) 00:04:48.375 07:50:54 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:48.375 07:50:54 -- setup/common.sh@62 -- # wait 47454 00:04:48.375 07:50:54 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.375 07:50:54 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:48.375 07:50:54 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.375 07:50:54 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:48.375 07:50:54 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:48.633 07:50:54 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.634 07:50:54 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:48.634 07:50:54 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:48.634 07:50:54 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:48.634 07:50:54 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.634 07:50:54 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:48.634 07:50:54 -- setup/devices.sh@53 -- # local found=0 00:04:48.634 07:50:54 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:48.634 07:50:54 -- setup/devices.sh@56 -- # : 00:04:48.634 07:50:54 -- setup/devices.sh@59 -- # local pci status 00:04:48.634 07:50:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.634 07:50:54 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:48.634 07:50:54 -- setup/devices.sh@47 -- # setup output config 00:04:48.634 07:50:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.634 07:50:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:48.892 07:50:54 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:48.892 07:50:54 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:48.892 07:50:54 -- setup/devices.sh@63 -- # found=1 00:04:48.892 07:50:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.892 07:50:54 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:48.892 07:50:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.892 07:50:54 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:48.892 07:50:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.892 07:50:54 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:48.892 07:50:54 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:48.892 07:50:54 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.892 07:50:54 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:48.892 07:50:54 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:48.892 07:50:54 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:48.892 07:50:54 -- setup/devices.sh@20 -- # mountpoint -q 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.892 07:50:54 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.892 07:50:54 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:48.892 07:50:54 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:48.892 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:48.892 07:50:54 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:48.892 07:50:54 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:49.150 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:49.150 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:49.150 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:49.150 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:49.150 07:50:54 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:49.150 07:50:54 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:49.150 07:50:54 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:49.150 07:50:54 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:49.150 07:50:54 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:49.150 07:50:54 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:49.150 07:50:54 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:49.150 07:50:54 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:49.150 07:50:54 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:49.150 07:50:54 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:49.150 07:50:54 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:49.150 07:50:54 -- setup/devices.sh@53 -- # local found=0 00:04:49.150 07:50:54 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:49.150 07:50:54 -- setup/devices.sh@56 -- # : 00:04:49.150 07:50:54 -- setup/devices.sh@59 -- # local pci status 00:04:49.150 07:50:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.150 07:50:54 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:49.150 07:50:54 -- setup/devices.sh@47 -- # setup output config 00:04:49.150 07:50:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.150 07:50:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:49.408 07:50:54 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:49.408 07:50:54 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:49.408 07:50:54 -- setup/devices.sh@63 -- # found=1 00:04:49.408 07:50:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.408 07:50:55 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:49.408 07:50:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.408 07:50:55 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:49.408 07:50:55 --
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.408 07:50:55 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:49.408 07:50:55 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:49.408 07:50:55 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:49.408 07:50:55 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:49.408 07:50:55 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:49.408 07:50:55 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:49.408 07:50:55 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:04:49.408 07:50:55 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:49.408 07:50:55 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:49.408 07:50:55 -- setup/devices.sh@50 -- # local mount_point= 00:04:49.408 07:50:55 -- setup/devices.sh@51 -- # local test_file= 00:04:49.408 07:50:55 -- setup/devices.sh@53 -- # local found=0 00:04:49.408 07:50:55 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:49.408 07:50:55 -- setup/devices.sh@59 -- # local pci status 00:04:49.408 07:50:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.408 07:50:55 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:49.408 07:50:55 -- setup/devices.sh@47 -- # setup output config 00:04:49.408 07:50:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.408 07:50:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:49.666 07:50:55 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:49.666 07:50:55 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:49.666 07:50:55 -- setup/devices.sh@63 -- # found=1 00:04:49.666 07:50:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.666 07:50:55 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:49.666 07:50:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.666 07:50:55 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:49.666 07:50:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.666 07:50:55 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:49.666 07:50:55 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:49.666 07:50:55 -- setup/devices.sh@68 -- # return 0 00:04:49.666 07:50:55 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:49.666 07:50:55 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:49.923 07:50:55 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:49.923 07:50:55 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:49.924 07:50:55 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:49.924 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:49.924 ************************************ 00:04:49.924 END TEST nvme_mount 00:04:49.924 ************************************ 00:04:49.924 00:04:49.924 real 0m3.428s 00:04:49.924 user 0m0.444s 00:04:49.924 sys 0m0.840s 00:04:49.924 07:50:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.924 07:50:55 -- common/autotest_common.sh@10 -- # set +x 00:04:49.924 07:50:55 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:49.924 07:50:55 -- common/autotest_common.sh@1077 
-- # '[' 2 -le 1 ']' 00:04:49.924 07:50:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:49.924 07:50:55 -- common/autotest_common.sh@10 -- # set +x 00:04:49.924 ************************************ 00:04:49.924 START TEST dm_mount 00:04:49.924 ************************************ 00:04:49.924 07:50:55 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:49.924 07:50:55 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:49.924 07:50:55 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:49.924 07:50:55 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:49.924 07:50:55 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:49.924 07:50:55 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:49.924 07:50:55 -- setup/common.sh@40 -- # local part_no=2 00:04:49.924 07:50:55 -- setup/common.sh@41 -- # local size=1073741824 00:04:49.924 07:50:55 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:49.924 07:50:55 -- setup/common.sh@44 -- # parts=() 00:04:49.924 07:50:55 -- setup/common.sh@44 -- # local parts 00:04:49.924 07:50:55 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:49.924 07:50:55 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:49.924 07:50:55 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:49.924 07:50:55 -- setup/common.sh@46 -- # (( part++ )) 00:04:49.924 07:50:55 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:49.924 07:50:55 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:49.924 07:50:55 -- setup/common.sh@46 -- # (( part++ )) 00:04:49.924 07:50:55 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:49.924 07:50:55 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:49.924 07:50:55 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:49.924 07:50:55 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:50.856 Creating new GPT entries. 00:04:50.856 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:50.856 other utilities. 00:04:50.856 07:50:56 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:50.856 07:50:56 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:50.856 07:50:56 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:50.856 07:50:56 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:50.856 07:50:56 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:52.233 Creating new GPT entries. 00:04:52.233 The operation has completed successfully. 00:04:52.233 07:50:57 -- setup/common.sh@57 -- # (( part++ )) 00:04:52.233 07:50:57 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:52.233 07:50:57 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:52.233 07:50:57 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:52.233 07:50:57 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:53.170 The operation has completed successfully. 
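The two sgdisk calls above are driven by the partition loop in setup/common.sh; the sector arithmetic visible in the trace reduces to the sketch below (a minimal re-creation assuming 512-byte logical sectors, reconstructed from the traced commands rather than copied from the script itself):

    # partition_drive nvme0n1, as traced: two equal partitions on a fresh GPT label
    disk=/dev/nvme0n1
    size=$(( 1073741824 / 4096 ))    # trace: (( size /= 4096 )) -> 262144 sectors each
    part_start=0 part_end=0
    for part in 1 2; do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))  # 2048-sector alignment
        (( part_end = part_start + size - 1 ))
        # flock serializes sgdisk against concurrent users of the same disk
        flock "$disk" sgdisk "$disk" --new="$part:$part_start:$part_end"
    done

Plugging the numbers in reproduces the trace exactly: partition 1 spans sectors 2048-264191, so partition 2 starts one sector later at 264192 and ends at 526335.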
00:04:53.170 07:50:58 -- setup/common.sh@57 -- # (( part++ )) 00:04:53.170 07:50:58 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:53.170 07:50:58 -- setup/common.sh@62 -- # wait 47778 00:04:53.170 07:50:58 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:53.170 07:50:58 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:53.170 07:50:58 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:53.170 07:50:58 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:53.170 07:50:58 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:53.170 07:50:58 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:53.170 07:50:58 -- setup/devices.sh@161 -- # break 00:04:53.170 07:50:58 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:53.170 07:50:58 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:53.170 07:50:58 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:53.170 07:50:58 -- setup/devices.sh@166 -- # dm=dm-0 00:04:53.170 07:50:58 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:53.170 07:50:58 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:53.170 07:50:58 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:53.170 07:50:58 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:53.170 07:50:58 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:53.170 07:50:58 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:53.170 07:50:58 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:53.170 07:50:58 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:53.170 07:50:58 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:53.170 07:50:58 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:53.170 07:50:58 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:53.170 07:50:58 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:53.170 07:50:58 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:53.170 07:50:58 -- setup/devices.sh@53 -- # local found=0 00:04:53.170 07:50:58 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:53.170 07:50:58 -- setup/devices.sh@56 -- # : 00:04:53.170 07:50:58 -- setup/devices.sh@59 -- # local pci status 00:04:53.170 07:50:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.170 07:50:58 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:53.170 07:50:58 -- setup/devices.sh@47 -- # setup output config 00:04:53.170 07:50:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.170 07:50:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:53.170 07:50:58 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:53.170 07:50:58 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == 
*\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:53.170 07:50:58 -- setup/devices.sh@63 -- # found=1 00:04:53.170 07:50:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.428 07:50:58 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:53.428 07:50:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.428 07:50:59 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:53.428 07:50:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.428 07:50:59 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:53.428 07:50:59 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:53.428 07:50:59 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:53.428 07:50:59 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:53.428 07:50:59 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:53.428 07:50:59 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:53.428 07:50:59 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:53.428 07:50:59 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:53.428 07:50:59 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:53.428 07:50:59 -- setup/devices.sh@50 -- # local mount_point= 00:04:53.428 07:50:59 -- setup/devices.sh@51 -- # local test_file= 00:04:53.428 07:50:59 -- setup/devices.sh@53 -- # local found=0 00:04:53.428 07:50:59 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:53.428 07:50:59 -- setup/devices.sh@59 -- # local pci status 00:04:53.428 07:50:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.428 07:50:59 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:53.428 07:50:59 -- setup/devices.sh@47 -- # setup output config 00:04:53.428 07:50:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.428 07:50:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:53.428 07:50:59 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:53.428 07:50:59 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:53.428 07:50:59 -- setup/devices.sh@63 -- # found=1 00:04:53.428 07:50:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.686 07:50:59 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:53.686 07:50:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.686 07:50:59 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:53.686 07:50:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.686 07:50:59 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:53.686 07:50:59 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:53.686 07:50:59 -- setup/devices.sh@68 -- # return 0 00:04:53.686 07:50:59 -- setup/devices.sh@187 -- # cleanup_dm 00:04:53.686 07:50:59 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:53.686 07:50:59 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:53.686 07:50:59 -- setup/devices.sh@37 -- # dmsetup remove --force 
nvme_dm_test 00:04:53.686 07:50:59 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:53.686 07:50:59 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:53.686 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:53.686 07:50:59 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:53.686 07:50:59 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:53.686 00:04:53.686 real 0m3.843s 00:04:53.686 user 0m0.261s 00:04:53.686 sys 0m0.506s 00:04:53.686 ************************************ 00:04:53.686 END TEST dm_mount 00:04:53.686 ************************************ 00:04:53.686 07:50:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.686 07:50:59 -- common/autotest_common.sh@10 -- # set +x 00:04:53.686 07:50:59 -- setup/devices.sh@1 -- # cleanup 00:04:53.686 07:50:59 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:53.686 07:50:59 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.686 07:50:59 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:53.686 07:50:59 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:53.686 07:50:59 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:53.686 07:50:59 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:53.686 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:53.686 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:53.686 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:53.686 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:53.686 07:50:59 -- setup/devices.sh@12 -- # cleanup_dm 00:04:53.686 07:50:59 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:53.686 07:50:59 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:53.686 07:50:59 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:53.686 07:50:59 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:53.686 07:50:59 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:53.686 07:50:59 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:53.686 00:04:53.686 real 0m7.954s 00:04:53.686 user 0m1.009s 00:04:53.686 sys 0m1.723s 00:04:53.686 ************************************ 00:04:53.686 END TEST devices 00:04:53.686 ************************************ 00:04:53.686 07:50:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.686 07:50:59 -- common/autotest_common.sh@10 -- # set +x 00:04:53.956 00:04:53.956 real 0m14.373s 00:04:53.956 user 0m3.736s 00:04:53.956 sys 0m5.654s 00:04:53.956 07:50:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.956 07:50:59 -- common/autotest_common.sh@10 -- # set +x 00:04:53.956 ************************************ 00:04:53.956 END TEST setup.sh 00:04:53.956 ************************************ 00:04:53.956 07:50:59 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:53.956 Hugepages 00:04:53.956 node hugesize free / total 00:04:53.956 node0 1048576kB 0 / 0 00:04:53.956 node0 2048kB 2048 / 2048 00:04:53.956 00:04:53.956 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:53.956 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:53.956 NVMe 0000:00:06.0 1b36 0010 0 nvme nvme0 nvme0n1 00:04:53.956 07:50:59 -- spdk/autotest.sh@141 -- # uname -s 00:04:53.956 07:50:59 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:04:53.956 07:50:59 --
spdk/autotest.sh@143 -- # nvme_namespace_revert 00:04:53.956 07:50:59 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:54.214 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:04:54.472 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:54.472 07:51:00 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:55.407 07:51:01 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:55.407 07:51:01 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:55.407 07:51:01 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:04:55.407 07:51:01 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:04:55.407 07:51:01 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:55.407 07:51:01 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:55.407 07:51:01 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:55.407 07:51:01 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:55.407 07:51:01 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:55.407 07:51:01 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:55.407 07:51:01 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:04:55.407 07:51:01 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:55.665 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:04:55.665 Waiting for block devices as requested 00:04:55.665 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:04:55.923 07:51:01 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:55.924 07:51:01 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:04:55.924 07:51:01 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:04:55.924 07:51:01 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:55.924 07:51:01 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:55.924 07:51:01 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:04:55.924 07:51:01 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:55.924 07:51:01 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:55.924 07:51:01 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:04:55.924 07:51:01 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:04:55.924 07:51:01 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:04:55.924 07:51:01 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:55.924 07:51:01 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:55.924 07:51:01 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:04:55.924 07:51:01 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:55.924 07:51:01 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:55.924 07:51:01 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:04:55.924 07:51:01 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:55.924 07:51:01 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:55.924 07:51:01 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:55.924 07:51:01 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:55.924 07:51:01 -- common/autotest_common.sh@1542 -- # continue 00:04:55.924 07:51:01 -- 
spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:04:55.924 07:51:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:55.924 07:51:01 -- common/autotest_common.sh@10 -- # set +x 00:04:55.924 07:51:01 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:04:55.924 07:51:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:55.924 07:51:01 -- common/autotest_common.sh@10 -- # set +x 00:04:55.924 07:51:01 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:56.182 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:04:56.182 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:56.442 07:51:01 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:04:56.442 07:51:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:56.442 07:51:01 -- common/autotest_common.sh@10 -- # set +x 00:04:56.442 07:51:02 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:04:56.442 07:51:02 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:56.442 07:51:02 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:56.442 07:51:02 -- common/autotest_common.sh@1562 -- # bdfs=() 00:04:56.442 07:51:02 -- common/autotest_common.sh@1562 -- # local bdfs 00:04:56.442 07:51:02 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:56.442 07:51:02 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:56.442 07:51:02 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:56.442 07:51:02 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:56.442 07:51:02 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:56.442 07:51:02 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:56.442 07:51:02 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:56.442 07:51:02 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:04:56.442 07:51:02 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:56.442 07:51:02 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:04:56.442 07:51:02 -- common/autotest_common.sh@1565 -- # device=0x0010 00:04:56.442 07:51:02 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:56.442 07:51:02 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:04:56.442 07:51:02 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:56.442 07:51:02 -- common/autotest_common.sh@1578 -- # return 0 00:04:56.442 07:51:02 -- spdk/autotest.sh@161 -- # '[' 1 -eq 1 ']' 00:04:56.442 07:51:02 -- spdk/autotest.sh@162 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:56.442 07:51:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:56.442 07:51:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:56.442 07:51:02 -- common/autotest_common.sh@10 -- # set +x 00:04:56.442 ************************************ 00:04:56.442 START TEST unittest 00:04:56.442 ************************************ 00:04:56.442 07:51:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:56.442 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:56.442 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:04:56.442 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:04:56.442 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:56.442 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 
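The `+ testdir=` / `+ rootdir=` lines at the top of the unittest.sh trace are the usual self-locating preamble; only the traced commands are visible here, not the script text, but they correspond to a sketch like:

    # Resolve the script's own directory, then walk up two levels to the repo root
    testdir=$(readlink -f "$(dirname "$0")")    # /home/vagrant/spdk_repo/spdk/test/unit
    rootdir=$(readlink -f "$testdir/../..")     # /home/vagrant/spdk_repo/spdk
    source "$rootdir/test/common/autotest_common.sh"

Resolving through readlink -f keeps the paths canonical no matter where the script is invoked from, which is why every later trace line shows fully absolute /home/vagrant/spdk_repo/spdk/... paths.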
00:04:56.442 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:56.442 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:04:56.442 ++ rpc_py=rpc_cmd 00:04:56.442 ++ set -e 00:04:56.442 ++ shopt -s nullglob 00:04:56.442 ++ shopt -s extglob 00:04:56.442 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:04:56.442 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:04:56.442 +++ CONFIG_RDMA=y 00:04:56.442 +++ CONFIG_UNIT_TESTS=y 00:04:56.442 +++ CONFIG_GOLANG=n 00:04:56.442 +++ CONFIG_FUSE=n 00:04:56.442 +++ CONFIG_ISAL=n 00:04:56.442 +++ CONFIG_VTUNE_DIR= 00:04:56.442 +++ CONFIG_CUSTOMOCF=n 00:04:56.442 +++ CONFIG_IPSEC_MB_DIR= 00:04:56.442 +++ CONFIG_VBDEV_COMPRESS=n 00:04:56.442 +++ CONFIG_OCF_PATH= 00:04:56.442 +++ CONFIG_SHARED=n 00:04:56.442 +++ CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:04:56.442 +++ CONFIG_TESTS=y 00:04:56.442 +++ CONFIG_APPS=y 00:04:56.442 +++ CONFIG_ISAL_CRYPTO=n 00:04:56.442 +++ CONFIG_LIBDIR= 00:04:56.442 +++ CONFIG_DPDK_COMPRESSDEV=n 00:04:56.442 +++ CONFIG_DAOS_DIR= 00:04:56.442 +++ CONFIG_ISCSI_INITIATOR=n 00:04:56.442 +++ CONFIG_DPDK_PKG_CONFIG=n 00:04:56.442 +++ CONFIG_ASAN=y 00:04:56.442 +++ CONFIG_LTO=n 00:04:56.442 +++ CONFIG_CET=n 00:04:56.442 +++ CONFIG_FUZZER=n 00:04:56.442 +++ CONFIG_USDT=n 00:04:56.442 +++ CONFIG_VTUNE=n 00:04:56.442 +++ CONFIG_VHOST=y 00:04:56.442 +++ CONFIG_WPDK_DIR= 00:04:56.442 +++ CONFIG_UBLK=n 00:04:56.442 +++ CONFIG_URING=n 00:04:56.442 +++ CONFIG_SMA=n 00:04:56.442 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:04:56.442 +++ CONFIG_IDXD_KERNEL=n 00:04:56.442 +++ CONFIG_FC_PATH= 00:04:56.442 +++ CONFIG_PREFIX=/usr/local 00:04:56.442 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=n 00:04:56.442 +++ CONFIG_XNVME=n 00:04:56.442 +++ CONFIG_RDMA_PROV=verbs 00:04:56.442 +++ CONFIG_RDMA_SET_TOS=y 00:04:56.442 +++ CONFIG_FUZZER_LIB= 00:04:56.442 +++ CONFIG_HAVE_LIBARCHIVE=n 00:04:56.442 +++ CONFIG_ARCH=native 00:04:56.442 +++ CONFIG_PGO_CAPTURE=n 00:04:56.442 +++ CONFIG_DAOS=y 00:04:56.442 +++ CONFIG_WERROR=y 00:04:56.442 +++ CONFIG_DEBUG=y 00:04:56.442 +++ CONFIG_AVAHI=n 00:04:56.442 +++ CONFIG_CROSS_PREFIX= 00:04:56.442 +++ CONFIG_PGO_USE=n 00:04:56.442 +++ CONFIG_CRYPTO=n 00:04:56.442 +++ CONFIG_HAVE_ARC4RANDOM=n 00:04:56.442 +++ CONFIG_OPENSSL_PATH= 00:04:56.442 +++ CONFIG_EXAMPLES=y 00:04:56.442 +++ CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:04:56.442 +++ CONFIG_MAX_LCORES= 00:04:56.442 +++ CONFIG_VIRTIO=y 00:04:56.442 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:56.442 +++ CONFIG_IPSEC_MB=n 00:04:56.442 +++ CONFIG_UBSAN=n 00:04:56.442 +++ CONFIG_HAVE_EXECINFO_H=y 00:04:56.442 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:04:56.442 +++ CONFIG_HAVE_LIBBSD=n 00:04:56.442 +++ CONFIG_URING_PATH= 00:04:56.442 +++ CONFIG_NVME_CUSE=y 00:04:56.442 +++ CONFIG_URING_ZNS=n 00:04:56.442 +++ CONFIG_VFIO_USER=n 00:04:56.442 +++ CONFIG_FC=n 00:04:56.442 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=n 00:04:56.442 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:04:56.442 +++ CONFIG_RBD=n 00:04:56.442 +++ CONFIG_RAID5F=n 00:04:56.442 +++ CONFIG_VFIO_USER_DIR= 00:04:56.442 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:04:56.442 +++ CONFIG_TSAN=n 00:04:56.442 +++ CONFIG_IDXD=y 00:04:56.442 +++ CONFIG_OCF=n 00:04:56.442 +++ CONFIG_CRYPTO_MLX5=n 00:04:56.442 +++ CONFIG_FIO_PLUGIN=y 00:04:56.442 +++ CONFIG_COVERAGE=y 00:04:56.442 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:04:56.442 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 
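Every CONFIG_* line in the build_config.sh dump above is a plain shell variable, and each has a C-preprocessor twin in include/spdk/config.h (dumped next): y becomes #define SPDK_CONFIG_<NAME> 1, while n or empty becomes #undef. A hedged illustration of the two equivalent ways a test script can branch on the same flag (the grep form mirrors the check unittest.sh itself performs later for SPDK_CONFIG_COVERAGE; the echo bodies are illustrative only):

    # After sourcing test/common/build_config.sh:
    [[ "$CONFIG_ASAN" == y ]] && echo 'ASAN build (shell-side check)'
    # Equivalent check against the generated header:
    grep -q '#define SPDK_CONFIG_ASAN 1' "$rootdir/include/spdk/config.h" \
        && echo 'ASAN build (header-side check)'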
00:04:56.442 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:04:56.442 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:04:56.442 +++ _root=/home/vagrant/spdk_repo/spdk 00:04:56.442 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:04:56.442 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:04:56.442 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:04:56.442 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:04:56.442 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:04:56.442 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:04:56.442 +++ VHOST_APP=("$_app_dir/vhost") 00:04:56.442 +++ DD_APP=("$_app_dir/spdk_dd") 00:04:56.442 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:04:56.442 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:04:56.442 +++ [[ #ifndef SPDK_CONFIG_H 00:04:56.442 #define SPDK_CONFIG_H 00:04:56.442 #define SPDK_CONFIG_APPS 1 00:04:56.442 #define SPDK_CONFIG_ARCH native 00:04:56.442 #define SPDK_CONFIG_ASAN 1 00:04:56.442 #undef SPDK_CONFIG_AVAHI 00:04:56.442 #undef SPDK_CONFIG_CET 00:04:56.442 #define SPDK_CONFIG_COVERAGE 1 00:04:56.442 #define SPDK_CONFIG_CROSS_PREFIX 00:04:56.442 #undef SPDK_CONFIG_CRYPTO 00:04:56.442 #undef SPDK_CONFIG_CRYPTO_MLX5 00:04:56.442 #undef SPDK_CONFIG_CUSTOMOCF 00:04:56.442 #define SPDK_CONFIG_DAOS 1 00:04:56.442 #define SPDK_CONFIG_DAOS_DIR 00:04:56.442 #define SPDK_CONFIG_DEBUG 1 00:04:56.442 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:04:56.442 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:04:56.442 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:04:56.442 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:04:56.443 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:04:56.443 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:56.443 #define SPDK_CONFIG_EXAMPLES 1 00:04:56.443 #undef SPDK_CONFIG_FC 00:04:56.443 #define SPDK_CONFIG_FC_PATH 00:04:56.443 #define SPDK_CONFIG_FIO_PLUGIN 1 00:04:56.443 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:04:56.443 #undef SPDK_CONFIG_FUSE 00:04:56.443 #undef SPDK_CONFIG_FUZZER 00:04:56.443 #define SPDK_CONFIG_FUZZER_LIB 00:04:56.443 #undef SPDK_CONFIG_GOLANG 00:04:56.443 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:04:56.443 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:04:56.443 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:04:56.443 #undef SPDK_CONFIG_HAVE_LIBBSD 00:04:56.443 #undef SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 00:04:56.443 #define SPDK_CONFIG_IDXD 1 00:04:56.443 #undef SPDK_CONFIG_IDXD_KERNEL 00:04:56.443 #undef SPDK_CONFIG_IPSEC_MB 00:04:56.443 #define SPDK_CONFIG_IPSEC_MB_DIR 00:04:56.443 #undef SPDK_CONFIG_ISAL 00:04:56.443 #undef SPDK_CONFIG_ISAL_CRYPTO 00:04:56.443 #undef SPDK_CONFIG_ISCSI_INITIATOR 00:04:56.443 #define SPDK_CONFIG_LIBDIR 00:04:56.443 #undef SPDK_CONFIG_LTO 00:04:56.443 #define SPDK_CONFIG_MAX_LCORES 00:04:56.443 #define SPDK_CONFIG_NVME_CUSE 1 00:04:56.443 #undef SPDK_CONFIG_OCF 00:04:56.443 #define SPDK_CONFIG_OCF_PATH 00:04:56.443 #define SPDK_CONFIG_OPENSSL_PATH 00:04:56.443 #undef SPDK_CONFIG_PGO_CAPTURE 00:04:56.443 #undef SPDK_CONFIG_PGO_USE 00:04:56.443 #define SPDK_CONFIG_PREFIX /usr/local 00:04:56.443 #undef SPDK_CONFIG_RAID5F 00:04:56.443 #undef SPDK_CONFIG_RBD 00:04:56.443 #define SPDK_CONFIG_RDMA 1 00:04:56.443 #define SPDK_CONFIG_RDMA_PROV verbs 00:04:56.443 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:04:56.443 #undef SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 00:04:56.443 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:04:56.443 #undef SPDK_CONFIG_SHARED 
00:04:56.443 #undef SPDK_CONFIG_SMA 00:04:56.443 #define SPDK_CONFIG_TESTS 1 00:04:56.443 #undef SPDK_CONFIG_TSAN 00:04:56.443 #undef SPDK_CONFIG_UBLK 00:04:56.443 #undef SPDK_CONFIG_UBSAN 00:04:56.443 #define SPDK_CONFIG_UNIT_TESTS 1 00:04:56.443 #undef SPDK_CONFIG_URING 00:04:56.443 #define SPDK_CONFIG_URING_PATH 00:04:56.443 #undef SPDK_CONFIG_URING_ZNS 00:04:56.443 #undef SPDK_CONFIG_USDT 00:04:56.443 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:04:56.443 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:04:56.443 #undef SPDK_CONFIG_VFIO_USER 00:04:56.443 #define SPDK_CONFIG_VFIO_USER_DIR 00:04:56.443 #define SPDK_CONFIG_VHOST 1 00:04:56.443 #define SPDK_CONFIG_VIRTIO 1 00:04:56.443 #undef SPDK_CONFIG_VTUNE 00:04:56.443 #define SPDK_CONFIG_VTUNE_DIR 00:04:56.443 #define SPDK_CONFIG_WERROR 1 00:04:56.443 #define SPDK_CONFIG_WPDK_DIR 00:04:56.443 #undef SPDK_CONFIG_XNVME 00:04:56.443 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:04:56.443 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:04:56.443 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:56.443 +++ [[ -e /bin/wpdk_common.sh ]] 00:04:56.443 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:56.443 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:56.443 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:04:56.443 ++++ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:04:56.443 ++++ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:04:56.443 ++++ export PATH 00:04:56.443 ++++ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:04:56.443 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:04:56.443 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:04:56.443 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:04:56.443 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:04:56.443 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:04:56.443 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:04:56.443 +++ TEST_TAG=N/A 00:04:56.443 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:04:56.443 ++ : 1 00:04:56.443 ++ export RUN_NIGHTLY 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_RUN_VALGRIND 00:04:56.443 ++ : 1 00:04:56.443 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:04:56.443 ++ : 1 00:04:56.443 ++ export SPDK_TEST_UNITTEST 00:04:56.443 ++ : 00:04:56.443 ++ export SPDK_TEST_AUTOBUILD 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_RELEASE_BUILD 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_ISAL 00:04:56.443 ++ : 0 00:04:56.443 ++ export 
SPDK_TEST_ISCSI 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_ISCSI_INITIATOR 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_NVME 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_NVME_PMR 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_NVME_BP 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_NVME_CLI 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_NVME_CUSE 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_NVME_FDP 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_NVMF 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_VFIOUSER 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_VFIOUSER_QEMU 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_FUZZER 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_FUZZER_SHORT 00:04:56.443 ++ : rdma 00:04:56.443 ++ export SPDK_TEST_NVMF_TRANSPORT 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_RBD 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_VHOST 00:04:56.443 ++ : 1 00:04:56.443 ++ export SPDK_TEST_BLOCKDEV 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_IOAT 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_BLOBFS 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_VHOST_INIT 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_LVOL 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_VBDEV_COMPRESS 00:04:56.443 ++ : 1 00:04:56.443 ++ export SPDK_RUN_ASAN 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_RUN_UBSAN 00:04:56.443 ++ : /home/vagrant/spdk_repo/dpdk/build 00:04:56.443 ++ export SPDK_RUN_EXTERNAL_DPDK 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_RUN_NON_ROOT 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_CRYPTO 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_FTL 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_OCF 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_VMD 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_OPAL 00:04:56.443 ++ : v22.11.4 00:04:56.443 ++ export SPDK_TEST_NATIVE_DPDK 00:04:56.443 ++ : true 00:04:56.443 ++ export SPDK_AUTOTEST_X 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_RAID5 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_URING 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_USDT 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_USE_IGB_UIO 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_SCHEDULER 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_SCANBUILD 00:04:56.443 ++ : 00:04:56.443 ++ export SPDK_TEST_NVMF_NICS 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_SMA 00:04:56.443 ++ : 1 00:04:56.443 ++ export SPDK_TEST_DAOS 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_XNVME 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_ACCEL_DSA 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_ACCEL_IAA 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_ACCEL_IOAT 00:04:56.443 ++ : 00:04:56.443 ++ export SPDK_TEST_FUZZER_TARGET 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_TEST_NVMF_MDNS 00:04:56.443 ++ : 0 00:04:56.443 ++ export SPDK_JSONRPC_GO_CLIENT 00:04:56.443 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:04:56.443 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:04:56.443 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:04:56.443 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:04:56.443 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:56.443 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:56.443 ++ export 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:56.443 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:56.443 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:04:56.443 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:04:56.443 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:04:56.443 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:04:56.443 ++ export PYTHONDONTWRITEBYTECODE=1 00:04:56.443 ++ PYTHONDONTWRITEBYTECODE=1 00:04:56.443 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:04:56.443 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:04:56.443 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:04:56.443 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:04:56.443 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:04:56.443 ++ rm -rf /var/tmp/asan_suppression_file 00:04:56.443 ++ cat 00:04:56.443 ++ echo leak:libfuse3.so 00:04:56.444 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:04:56.444 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:04:56.444 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:04:56.444 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:04:56.444 ++ '[' -z /var/spdk/dependencies ']' 00:04:56.444 ++ export DEPENDENCY_DIR 00:04:56.444 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:04:56.444 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:04:56.444 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:04:56.444 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:04:56.444 ++ export QEMU_BIN= 00:04:56.444 ++ QEMU_BIN= 00:04:56.444 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:04:56.444 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:04:56.444 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:04:56.444 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:04:56.444 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:56.444 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:56.444 ++ '[' 0 -eq 0 ']' 00:04:56.444 ++ export valgrind= 00:04:56.444 ++ valgrind= 00:04:56.444 +++ uname -s 00:04:56.444 ++ '[' Linux = Linux ']' 00:04:56.444 ++ HUGEMEM=4096 00:04:56.444 ++ export CLEAR_HUGE=yes 00:04:56.444 ++ CLEAR_HUGE=yes 00:04:56.444 ++ [[ 0 -eq 1 ]] 00:04:56.444 ++ [[ 0 -eq 1 ]] 00:04:56.444 ++ MAKE=make 00:04:56.444 +++ nproc 00:04:56.444 ++ MAKEFLAGS=-j10 
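The asan_suppression_file steps above are the stock LeakSanitizer suppression mechanism: write one leak:<pattern> rule per line, then point LSAN_OPTIONS at the file before any instrumented process starts. Reduced to a sketch (the libfuse3 rule and the option strings are the ones actually exported in the trace):

    supp=/var/tmp/asan_suppression_file
    rm -rf "$supp"
    echo 'leak:libfuse3.so' > "$supp"    # known third-party leak: suppress, don't fail
    export LSAN_OPTIONS="suppressions=$supp"
    # Fail hard and keep diagnostics for every other sanitizer finding:
    export ASAN_OPTIONS='new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0'

The MAKEFLAGS=-j10 assignment at the end of the block is simply -j$(nproc), so build parallelism tracks the 10 CPUs of this runner.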
00:04:56.444 ++ export HUGEMEM=4096 00:04:56.444 ++ HUGEMEM=4096 00:04:56.444 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:04:56.444 ++ NO_HUGE=() 00:04:56.444 ++ TEST_MODE= 00:04:56.444 ++ [[ -z '' ]] 00:04:56.444 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:04:56.444 ++ exec 00:04:56.444 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:04:56.444 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:04:56.444 ++ set_test_storage 2147483648 00:04:56.444 ++ [[ -v testdir ]] 00:04:56.444 ++ local requested_size=2147483648 00:04:56.444 ++ local mount target_dir 00:04:56.444 ++ local -A mounts fss sizes avails uses 00:04:56.444 ++ local source fs size avail mount use 00:04:56.444 ++ local storage_fallback storage_candidates 00:04:56.444 +++ mktemp -udt spdk.XXXXXX 00:04:56.444 ++ storage_fallback=/tmp/spdk.WVylOn 00:04:56.444 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:04:56.444 ++ [[ -n '' ]] 00:04:56.444 ++ [[ -n '' ]] 00:04:56.444 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.WVylOn/tests/unit /tmp/spdk.WVylOn 00:04:56.444 ++ requested_size=2214592512 00:04:56.444 ++ read -r source fs size use avail _ mount 00:04:56.444 +++ grep -v Filesystem 00:04:56.444 +++ df -T 00:04:56.444 ++ mounts["$mount"]=devtmpfs 00:04:56.444 ++ fss["$mount"]=devtmpfs 00:04:56.444 ++ avails["$mount"]=6267637760 00:04:56.444 ++ sizes["$mount"]=6267637760 00:04:56.444 ++ uses["$mount"]=0 00:04:56.444 ++ read -r source fs size use avail _ mount 00:04:56.444 ++ mounts["$mount"]=tmpfs 00:04:56.444 ++ fss["$mount"]=tmpfs 00:04:56.444 ++ avails["$mount"]=6298185728 00:04:56.444 ++ sizes["$mount"]=6298185728 00:04:56.444 ++ uses["$mount"]=0 00:04:56.444 ++ read -r source fs size use avail _ mount 00:04:56.444 ++ mounts["$mount"]=tmpfs 00:04:56.444 ++ fss["$mount"]=tmpfs 00:04:56.444 ++ avails["$mount"]=6280884224 00:04:56.444 ++ sizes["$mount"]=6298185728 00:04:56.444 ++ uses["$mount"]=17301504 00:04:56.444 ++ read -r source fs size use avail _ mount 00:04:56.444 ++ mounts["$mount"]=tmpfs 00:04:56.444 ++ fss["$mount"]=tmpfs 00:04:56.444 ++ avails["$mount"]=6298185728 00:04:56.444 ++ sizes["$mount"]=6298185728 00:04:56.444 ++ uses["$mount"]=0 00:04:56.444 ++ read -r source fs size use avail _ mount 00:04:56.444 ++ mounts["$mount"]=/dev/vda1 00:04:56.444 ++ fss["$mount"]=xfs 00:04:56.444 ++ avails["$mount"]=12945281024 00:04:56.444 ++ sizes["$mount"]=21463302144 00:04:56.444 ++ uses["$mount"]=8518021120 00:04:56.444 ++ read -r source fs size use avail _ mount 00:04:56.444 ++ mounts["$mount"]=tmpfs 00:04:56.444 ++ fss["$mount"]=tmpfs 00:04:56.444 ++ avails["$mount"]=1259638784 00:04:56.444 ++ sizes["$mount"]=1259638784 00:04:56.444 ++ uses["$mount"]=0 00:04:56.444 ++ read -r source fs size use avail _ mount 00:04:56.444 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/centos7-vg-autotest/centos7-libvirt/output 00:04:56.444 ++ fss["$mount"]=fuse.sshfs 00:04:56.444 ++ avails["$mount"]=95072387072 00:04:56.444 ++ sizes["$mount"]=105088212992 00:04:56.444 ++ uses["$mount"]=4630392832 00:04:56.444 ++ read -r source fs size use avail _ mount 00:04:56.444 ++ printf '* Looking for test storage...\n' 00:04:56.444 * Looking for test storage... 
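set_test_storage fills one associative array per df column and then checks whether the filesystem backing a candidate directory can hold the requested 2 GiB plus 64 MiB of slack (2214592512 bytes). The traced loop reduces to the sketch below; note the trace shows only `df -T`, while the recorded sizes are byte-scale, so the -B1 flag here is an assumption about how df was configured rather than something visible in the log:

    requested_size=$(( 2 * 1024**3 + 64 * 1024**2 ))   # 2214592512, as in the trace
    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source; fss["$mount"]=$fs
        sizes["$mount"]=$size; avails["$mount"]=$avail; uses["$mount"]=$use
    done < <(df -T -B1 | grep -v Filesystem)           # -B1 assumed: sizes in bytes
    # Map the candidate directory to its mount point, then compare free space:
    mount=$(df /home/vagrant/spdk_repo/spdk/test/unit | awk '$1 !~ /Filesystem/{print $6}')
    (( ${avails[$mount]:-0} >= requested_size )) && echo "using $mount for test storage"

Here /dev/vda1 mounted at / offers 12945281024 bytes against the 2214592512 requested, so the first candidate wins and the "Found test storage" message that follows points at the test/unit directory itself.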
00:04:56.444 ++ local target_space new_size 00:04:56.444 ++ for target_dir in "${storage_candidates[@]}" 00:04:56.444 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:04:56.444 +++ awk '$1 !~ /Filesystem/{print $6}' 00:04:56.444 ++ mount=/ 00:04:56.444 ++ target_space=12945281024 00:04:56.444 ++ (( target_space == 0 || target_space < requested_size )) 00:04:56.444 ++ (( target_space >= requested_size )) 00:04:56.444 ++ [[ xfs == tmpfs ]] 00:04:56.444 ++ [[ xfs == ramfs ]] 00:04:56.444 ++ [[ / == / ]] 00:04:56.444 ++ new_size=10732613632 00:04:56.444 ++ (( new_size * 100 / sizes[/] > 95 )) 00:04:56.444 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:04:56.444 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:04:56.444 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:04:56.444 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:04:56.444 ++ return 0 00:04:56.444 ++ set -o errtrace 00:04:56.444 ++ shopt -s extdebug 00:04:56.444 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:04:56.444 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:04:56.444 07:51:02 -- common/autotest_common.sh@1672 -- # true 00:04:56.444 07:51:02 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:04:56.444 07:51:02 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:04:56.444 07:51:02 -- common/autotest_common.sh@29 -- # exec 00:04:56.444 07:51:02 -- common/autotest_common.sh@31 -- # xtrace_restore 00:04:56.444 07:51:02 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:04:56.444 07:51:02 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:04:56.444 07:51:02 -- common/autotest_common.sh@18 -- # set -x 00:04:56.444 07:51:02 -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:04:56.444 07:51:02 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:04:56.444 07:51:02 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:04:56.444 07:51:02 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:04:56.444 07:51:02 -- unit/unittest.sh@178 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:04:56.444 07:51:02 -- unit/unittest.sh@178 -- # CC_TYPE=CC_TYPE=gcc 00:04:56.444 07:51:02 -- unit/unittest.sh@179 -- # hash lcov 00:04:56.444 07:51:02 -- unit/unittest.sh@179 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:56.444 07:51:02 -- unit/unittest.sh@179 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:56.444 07:51:02 -- unit/unittest.sh@180 -- # cov_avail=yes 00:04:56.444 07:51:02 -- unit/unittest.sh@184 -- # '[' yes = yes ']' 00:04:56.444 07:51:02 -- unit/unittest.sh@186 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:04:56.444 07:51:02 -- unit/unittest.sh@189 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:04:56.444 07:51:02 -- unit/unittest.sh@191 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:04:56.444 07:51:02 -- unit/unittest.sh@199 -- # export 'LCOV_OPTS= 00:04:56.444 --rc lcov_branch_coverage=1 00:04:56.444 --rc lcov_function_coverage=1 00:04:56.444 --rc genhtml_branch_coverage=1 00:04:56.444 --rc genhtml_function_coverage=1 00:04:56.444 --rc genhtml_legend=1 00:04:56.444 --rc geninfo_all_blocks=1 00:04:56.444 ' 00:04:56.444 07:51:02 -- unit/unittest.sh@199 -- # LCOV_OPTS=' 00:04:56.444 --rc lcov_branch_coverage=1 00:04:56.444 --rc lcov_function_coverage=1 00:04:56.444 --rc genhtml_branch_coverage=1 00:04:56.444 --rc genhtml_function_coverage=1 00:04:56.444 --rc genhtml_legend=1 00:04:56.444 
--rc geninfo_all_blocks=1 00:04:56.444 ' 00:04:56.444 07:51:02 -- unit/unittest.sh@200 -- # export 'LCOV=lcov 00:04:56.444 --rc lcov_branch_coverage=1 00:04:56.444 --rc lcov_function_coverage=1 00:04:56.444 --rc genhtml_branch_coverage=1 00:04:56.444 --rc genhtml_function_coverage=1 00:04:56.444 --rc genhtml_legend=1 00:04:56.444 --rc geninfo_all_blocks=1 00:04:56.444 --no-external' 00:04:56.444 07:51:02 -- unit/unittest.sh@200 -- # LCOV='lcov 00:04:56.444 --rc lcov_branch_coverage=1 00:04:56.444 --rc lcov_function_coverage=1 00:04:56.444 --rc genhtml_branch_coverage=1 00:04:56.444 --rc genhtml_function_coverage=1 00:04:56.444 --rc genhtml_legend=1 00:04:56.444 --rc geninfo_all_blocks=1 00:04:56.444 --no-external' 00:04:56.444 07:51:02 -- unit/unittest.sh@202 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . -t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:05:04.556 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:05:04.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:05:04.556 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:05:04.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:05:04.556 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:05:04.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:19.448 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:19.448 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:19.448 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:19.448 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:19.448 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:19.448 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:19.448 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:19.448 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:19.448 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:19.448 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:19.448 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:19.448 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:19.448 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:19.448 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:19.448 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:19.448 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:19.448 
00:05:19.448 geninfo: WARNING: GCOV did not produce any data for the header-inclusion stubs under /home/vagrant/spdk_repo/spdk/test/cpp_headers; "no functions found" was reported for each of the following .gcno files (timestamps 00:05:19.448 through 00:05:19.967): blobfs, pipe, env, dma, opal_spec, uuid, bdev, hexlify, nbd, likely, vhost, memory, vfio_user_pci, nvme_zns, env_dpdk, init, fd_group, bdev_module, opal, event, base64, nvmf, nvmf_spec, blobfs_bdev, crc32, fd, barrier, lvol, nvmf_fc_spec, nvme, zipf, scheduler, dif, queue, scsi_spec, idxd, blob, cpuset, thread, tree, xor, assert, endian, file, ftl, blob_bdev, notify, util, log, sock, nvmf_transport, nvme_ocssd_spec, config, histogram_data, nvme_intel, idxd_spec, jsonrpc, crc16, bdev_zone, stdinc, vmd, trace, crc64, scsi, conf, iscsi_spec, nvmf_cmd, ioat_spec, ublk, bit_array, pci_ids, nvme_spec, string, gpt_spec, nvme_ocssd, json, reduce, mmio
00:05:58.676 07:52:02 -- unit/unittest.sh@206 -- # uname -m 00:05:58.676 07:52:02 -- unit/unittest.sh@206 -- # '[' x86_64 = aarch64 ']' 00:05:58.676 07:52:02 -- unit/unittest.sh@210 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:05:58.676 07:52:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:58.676 07:52:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:58.676 07:52:02 -- common/autotest_common.sh@10 -- # set +x 00:05:58.676 ************************************ 00:05:58.676 START TEST unittest_pci_event 00:05:58.676 ************************************ 00:05:58.676 07:52:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:05:58.676 00:05:58.676 00:05:58.676 CUnit - A unit testing framework for C - Version 2.1-3 00:05:58.676 http://cunit.sourceforge.net/ 00:05:58.676 00:05:58.676 00:05:58.676 Suite: pci_event 00:05:58.676 Test: test_pci_parse_event ...passed 00:05:58.676 00:05:58.676 Run
Summary: Type Total Ran Passed Failed Inactive 00:05:58.676 suites 1 1 n/a 0 0 00:05:58.676 tests 1 1 1 0 0 00:05:58.676 asserts 15 15 15 0 n/a 00:05:58.676 00:05:58.676 Elapsed time = 0.000 seconds 00:05:58.676 [2024-07-13 07:52:02.686371] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:05:58.676 [2024-07-13 07:52:02.686697] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:05:58.676 ************************************ 00:05:58.676 END TEST unittest_pci_event 00:05:58.676 ************************************ 00:05:58.676 00:05:58.676 real 0m0.023s 00:05:58.676 user 0m0.011s 00:05:58.676 sys 0m0.012s 00:05:58.676 07:52:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.676 07:52:02 -- common/autotest_common.sh@10 -- # set +x 00:05:58.676 07:52:02 -- unit/unittest.sh@211 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:05:58.676 07:52:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:58.676 07:52:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:58.676 07:52:02 -- common/autotest_common.sh@10 -- # set +x 00:05:58.676 ************************************ 00:05:58.676 START TEST unittest_include 00:05:58.676 ************************************ 00:05:58.676 07:52:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:05:58.676 00:05:58.676 00:05:58.676 CUnit - A unit testing framework for C - Version 2.1-3 00:05:58.676 http://cunit.sourceforge.net/ 00:05:58.676 00:05:58.676 00:05:58.676 Suite: histogram 00:05:58.676 Test: histogram_test ...passed 00:05:58.676 Test: histogram_merge ...passed 00:05:58.676 00:05:58.676 Run Summary: Type Total Ran Passed Failed Inactive 00:05:58.676 suites 1 1 n/a 0 0 00:05:58.676 tests 2 2 2 0 0 00:05:58.676 asserts 50 50 50 0 n/a 00:05:58.676 00:05:58.676 Elapsed time = 0.000 seconds 00:05:58.676 ************************************ 00:05:58.676 END TEST unittest_include 00:05:58.676 ************************************ 00:05:58.676 00:05:58.676 real 0m0.028s 00:05:58.676 user 0m0.014s 00:05:58.676 sys 0m0.014s 00:05:58.676 07:52:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.676 07:52:02 -- common/autotest_common.sh@10 -- # set +x 00:05:58.676 07:52:02 -- unit/unittest.sh@212 -- # run_test unittest_bdev unittest_bdev 00:05:58.676 07:52:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:58.676 07:52:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:58.676 07:52:02 -- common/autotest_common.sh@10 -- # set +x 00:05:58.676 ************************************ 00:05:58.676 START TEST unittest_bdev 00:05:58.676 ************************************ 00:05:58.676 07:52:02 -- common/autotest_common.sh@1104 -- # unittest_bdev 00:05:58.676 07:52:02 -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:05:58.676 00:05:58.676 00:05:58.676 CUnit - A unit testing framework for C - Version 2.1-3 00:05:58.676 http://cunit.sourceforge.net/ 00:05:58.676 00:05:58.676 00:05:58.676 Suite: bdev 00:05:58.676 Test: bytes_to_blocks_test ...passed 00:05:58.676 Test: num_blocks_test ...passed 00:05:58.676 Test: io_valid_test ...passed 00:05:58.676 Test: open_write_test ...[2024-07-13 07:52:02.906149] 
/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:05:58.676 [2024-07-13 07:52:02.906381] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:05:58.676 [2024-07-13 07:52:02.906641] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:05:58.676 passed 00:05:58.676 Test: claim_test ...passed 00:05:58.677 Test: alias_add_del_test ...[2024-07-13 07:52:03.015266] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:05:58.677 [2024-07-13 07:52:03.015369] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4583:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:05:58.677 [2024-07-13 07:52:03.015408] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:05:58.677 passed 00:05:58.677 Test: get_device_stat_test ...passed 00:05:58.677 Test: bdev_io_types_test ...passed 00:05:58.677 Test: bdev_io_wait_test ...passed 00:05:58.677 Test: bdev_io_spans_split_test ...passed 00:05:58.677 Test: bdev_io_boundary_split_test ...passed 00:05:58.677 Test: bdev_io_max_size_and_segment_split_test ...[2024-07-13 07:52:03.232557] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:05:58.677 passed 00:05:58.677 Test: bdev_io_mix_split_test ...passed 00:05:58.677 Test: bdev_io_split_with_io_wait ...passed 00:05:58.677 Test: bdev_io_write_unit_split_test ...[2024-07-13 07:52:03.413364] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:05:58.677 [2024-07-13 07:52:03.413452] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:05:58.677 [2024-07-13 07:52:03.413850] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:05:58.677 [2024-07-13 07:52:03.413925] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:05:58.677 passed 00:05:58.677 Test: bdev_io_alignment_with_boundary ...passed 00:05:58.677 Test: bdev_io_alignment ...passed 00:05:58.677 Test: bdev_histograms ...passed 00:05:58.677 Test: bdev_write_zeroes ...passed 00:05:58.677 Test: bdev_compare_and_write ...passed 00:05:58.677 Test: bdev_compare ...passed 00:05:58.677 Test: bdev_compare_emulated ...passed 00:05:58.677 Test: bdev_zcopy_write ...passed 00:05:58.677 Test: bdev_zcopy_read ...passed 00:05:58.677 Test: bdev_open_while_hotremove ...passed 00:05:58.677 Test: bdev_close_while_hotremove ...passed 00:05:58.677 Test: bdev_open_ext_test ...passed 00:05:58.677 Test: bdev_open_ext_unregister ...passed 00:05:58.677 Test: bdev_set_io_timeout ...[2024-07-13 07:52:03.945133] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:05:58.677 [2024-07-13 07:52:03.945277] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:05:58.677 passed 00:05:58.677 Test: bdev_set_qd_sampling ...passed 00:05:58.677 Test: lba_range_overlap ...passed 00:05:58.677 Test: lock_lba_range_check_ranges 
...passed 00:05:58.677 Test: lock_lba_range_with_io_outstanding ...passed 00:05:58.677 Test: lock_lba_range_overlapped ...passed 00:05:58.677 Test: bdev_quiesce ...[2024-07-13 07:52:04.228237] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9969:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 00:05:58.677 passed 00:05:58.677 Test: bdev_io_abort ...passed 00:05:58.677 Test: bdev_unmap ...passed 00:05:58.677 Test: bdev_write_zeroes_split_test ...passed 00:05:58.677 Test: bdev_set_options_test ...passed 00:05:58.677 Test: bdev_get_memory_domains ...passed 00:05:58.677 Test: bdev_io_ext ...[2024-07-13 07:52:04.363670] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 485:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:05:58.677 passed 00:05:58.677 Test: bdev_io_ext_no_opts ...passed 00:05:58.935 Test: bdev_io_ext_invalid_opts ...passed 00:05:58.935 Test: bdev_io_ext_split ...passed 00:05:58.935 Test: bdev_io_ext_bounce_buffer ...passed 00:05:58.935 Test: bdev_register_uuid_alias ...[2024-07-13 07:52:04.638978] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 21646cc5-0ed6-48b4-9949-f6cd8a7f1a79 already exists 00:05:58.935 [2024-07-13 07:52:04.639048] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:21646cc5-0ed6-48b4-9949-f6cd8a7f1a79 alias for bdev bdev0 00:05:58.935 passed 00:05:58.935 Test: bdev_unregister_by_name ...passed 00:05:58.935 Test: for_each_bdev_test ...passed 00:05:58.935 Test: bdev_seek_test ...[2024-07-13 07:52:04.660688] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7836:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:05:58.935 [2024-07-13 07:52:04.660743] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7844:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 
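The bdev_open_ext and open_write_test failures above all flow through one entry point: spdk_bdev_open_ext() refuses a NULL event callback ("Missing event callback function"), and a second writer on an already-claimed bdev fails the claim check. A minimal sketch of that contract, using the public API from SPDK's include/spdk/bdev.h; anything not quoted from the log is illustrative:

#include "spdk/bdev.h"

static void
ut_event_cb(enum spdk_bdev_event_type type, struct spdk_bdev *bdev, void *ctx)
{
        /* SPDK_BDEV_EVENT_REMOVE is the hot-remove case the
         * bdev_open_while_hotremove test above drives. */
}

static int
ut_open(const char *name, struct spdk_bdev_desc **desc)
{
        /* Passing NULL for the callback here is exactly the "Missing event
         * callback function" error spdk_bdev_open_ext logs above. */
        return spdk_bdev_open_ext(name, true /* write */, ut_event_cb, NULL, desc);
}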
00:05:58.935 passed 00:05:58.935 Test: bdev_copy ...passed 00:05:59.195 Test: bdev_copy_split_test ...passed 00:05:59.195 Test: examine_locks ...passed 00:05:59.195 Test: claim_v2_rwo ...passed 00:05:59.195 Test: claim_v2_rom ...[2024-07-13 07:52:04.783995] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:59.195 [2024-07-13 07:52:04.784056] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8570:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:59.195 [2024-07-13 07:52:04.784072] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:59.195 [2024-07-13 07:52:04.784122] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:59.195 [2024-07-13 07:52:04.784138] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:59.195 [2024-07-13 07:52:04.784173] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8565:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:05:59.195 [2024-07-13 07:52:04.784267] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:59.195 [2024-07-13 07:52:04.784317] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:59.195 [2024-07-13 07:52:04.784336] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:59.195 [2024-07-13 07:52:04.784357] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:59.195 [2024-07-13 07:52:04.784382] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8608:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:05:59.195 [2024-07-13 07:52:04.784420] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:59.195 passed 00:05:59.195 Test: claim_v2_rwm ...passed 00:05:59.195 Test: claim_v2_existing_writer ...passed 00:05:59.195 Test: claim_v2_existing_v1 ...passed 00:05:59.195 Test: claim_v1_existing_v2 ...passed 00:05:59.195 Test: examine_claimed ...passed 00:05:59.195 00:05:59.195 Run Summary: Type Total Ran Passed Failed Inactive 00:05:59.195 suites 1 1 n/a 0 0 00:05:59.195 tests 59 59 59 0 0 00:05:59.195 asserts 4599 4599 4599 0 n/a 00:05:59.195 00:05:59.195 Elapsed time = 1.940 seconds 00:05:59.195 [2024-07-13 07:52:04.784776] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:05:59.195 [2024-07-13 07:52:04.784833] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:59.195 [2024-07-13 07:52:04.784856] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev 
bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:59.195 [2024-07-13 07:52:04.784879] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:59.195 [2024-07-13 07:52:04.784898] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:59.195 [2024-07-13 07:52:04.784924] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8658:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:05:59.195 [2024-07-13 07:52:04.784952] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:05:59.195 [2024-07-13 07:52:04.785052] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:59.195 [2024-07-13 07:52:04.785079] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:59.195 [2024-07-13 07:52:04.785159] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:59.195 [2024-07-13 07:52:04.785185] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:59.195 [2024-07-13 07:52:04.785204] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:59.195 [2024-07-13 07:52:04.785283] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:59.195 [2024-07-13 07:52:04.785319] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:59.195 [2024-07-13 07:52:04.785347] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:59.195 [2024-07-13 07:52:04.785553] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:05:59.195 07:52:04 -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:05:59.195 00:05:59.195 00:05:59.195 CUnit - A unit testing framework for C - Version 2.1-3 00:05:59.195 http://cunit.sourceforge.net/ 00:05:59.195 00:05:59.195 00:05:59.195 Suite: nvme 00:05:59.195 Test: test_create_ctrlr ...passed 00:05:59.195 Test: test_reset_ctrlr ...passed 00:05:59.195 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:05:59.195 Test: test_failover_ctrlr ...[2024-07-13 07:52:04.823933] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
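Stepping back to the claim_v2_rwo/rom/rwm results above: a v2 claim is taken on an open descriptor, and the log quotes the two key rules, no key with read-write-once claims and shared_claim_key required for read-write-many claims. A hedged sketch (the opts initializer and field names follow SPDK's bdev_module.h as best as can be inferred; treat them as assumptions):

#include "spdk/bdev_module.h"

static struct spdk_bdev_module g_ut_module = { .name = "bdev_ut" };

static int
claim_read_many_write_one(struct spdk_bdev_desc *desc)
{
        struct spdk_bdev_claim_opts opts;

        spdk_bdev_claim_opts_init(&opts, sizeof(opts));
        /* Setting opts.shared_claim_key here would trip "key option not
         * supported with read-write-once claims" (claim_verify_rwo above). */
        return spdk_bdev_module_claim_bdev_desc(desc,
                        SPDK_BDEV_CLAIM_READ_MANY_WRITE_ONE, &opts, &g_ut_module);
}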
00:05:59.195 passed 00:05:59.195 Test: test_race_between_failover_and_add_secondary_trid ...[2024-07-13 07:52:04.825190] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:59.195 [2024-07-13 07:52:04.825318] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:59.195 passed 00:05:59.195 Test: test_pending_reset ...[2024-07-13 07:52:04.825477] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:59.195 [2024-07-13 07:52:04.826326] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:59.195 [2024-07-13 07:52:04.826442] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:59.195 passed 00:05:59.195 Test: test_attach_ctrlr ...[2024-07-13 07:52:04.827020] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:05:59.195 passed 00:05:59.195 Test: test_aer_cb ...passed 00:05:59.195 Test: test_submit_nvme_cmd ...passed 00:05:59.195 Test: test_add_remove_trid ...passed 00:05:59.195 Test: test_abort ...passed 00:05:59.195 Test: test_get_io_qpair ...passed 00:05:59.195 Test: test_bdev_unregister ...[2024-07-13 07:52:04.828966] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7227:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:05:59.195 passed 00:05:59.195 Test: test_compare_ns ...passed 00:05:59.195 Test: test_init_ana_log_page ...passed 00:05:59.195 Test: test_get_memory_domains ...passed 00:05:59.195 Test: test_reconnect_qpair ...passed 00:05:59.195 Test: test_create_bdev_ctrlr ...[2024-07-13 07:52:04.830353] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:59.195 [2024-07-13 07:52:04.830676] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5279:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:05:59.195 passed 00:05:59.195 Test: test_add_multi_ns_to_bdev ...[2024-07-13 07:52:04.831445] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4492:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:05:59.195 passed 00:05:59.195 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:05:59.195 Test: test_admin_path ...passed 00:05:59.195 Test: test_reset_bdev_ctrlr ...passed 00:05:59.195 Test: test_find_io_path ...passed 00:05:59.195 Test: test_retry_io_if_ana_state_is_updating ...passed 00:05:59.195 Test: test_retry_io_for_io_path_error ...passed 00:05:59.195 Test: test_retry_io_count ...passed 00:05:59.195 Test: test_concurrent_read_ana_log_page ...passed 00:05:59.195 Test: test_retry_io_for_ana_error ...passed 00:05:59.195 Test: test_check_io_error_resiliency_params ...passed 00:05:59.195 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:05:59.195 Test: test_reconnect_ctrlr ...[2024-07-13 07:52:04.835107] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5932:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 
00:05:59.195 [2024-07-13 07:52:04.835185] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5936:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:59.196 [2024-07-13 07:52:04.835224] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5945:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:59.196 [2024-07-13 07:52:04.835275] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5948:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:05:59.196 [2024-07-13 07:52:04.835300] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:59.196 [2024-07-13 07:52:04.835333] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:59.196 [2024-07-13 07:52:04.835358] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5940:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:05:59.196 [2024-07-13 07:52:04.835403] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5955:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:05:59.196 [2024-07-13 07:52:04.835440] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5952:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:05:59.196 [2024-07-13 07:52:04.835869] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:59.196 [2024-07-13 07:52:04.835960] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:59.196 [2024-07-13 07:52:04.836094] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:59.196 [2024-07-13 07:52:04.836194] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:59.196 passed 00:05:59.196 Test: test_retry_failover_ctrlr ...[2024-07-13 07:52:04.836269] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:59.196 passed 00:05:59.196 Test: test_fail_path ...[2024-07-13 07:52:04.836480] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:59.196 [2024-07-13 07:52:04.836777] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:59.196 [2024-07-13 07:52:04.836887] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:59.196 [2024-07-13 07:52:04.836973] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:05:59.196 [2024-07-13 07:52:04.837030] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:59.196 [2024-07-13 07:52:04.837128] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:59.196 passed 00:05:59.196 Test: test_nvme_ns_cmp ...passed 00:05:59.196 Test: test_ana_transition ...passed 00:05:59.196 Test: test_set_preferred_path ...passed 00:05:59.196 Test: test_find_next_io_path ...passed 00:05:59.196 Test: test_find_io_path_min_qd ...passed 00:05:59.196 Test: test_disable_auto_failback ...passed 00:05:59.196 Test: test_set_multipath_policy ...passed 00:05:59.196 Test: test_uuid_generation ...[2024-07-13 07:52:04.838253] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:59.196 passed 00:05:59.196 Test: test_retry_io_to_same_path ...passed 00:05:59.196 Test: test_race_between_reset_and_disconnected ...passed 00:05:59.196 Test: test_ctrlr_op_rpc ...passed 00:05:59.196 Test: test_bdev_ctrlr_op_rpc ...passed 00:05:59.196 Test: test_disable_enable_ctrlr ...passed 00:05:59.196 Test: test_delete_ctrlr_done ...passed 00:05:59.196 Test: test_ns_remove_during_reset ...passed 00:05:59.196 00:05:59.196 Run Summary: Type Total Ran Passed Failed Inactive 00:05:59.196 suites 1 1 n/a 0 0 00:05:59.196 tests 48 48 48 0 0 00:05:59.196 asserts 3553 3553 3553 0 n/a 00:05:59.196 00:05:59.196 Elapsed time = 0.020 seconds 00:05:59.196 [2024-07-13 07:52:04.840512] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:59.196 [2024-07-13 07:52:04.840617] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
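Every *_ut binary in this run prints the same CUnit 2.1-3 banner and Run Summary table seen above. The skeleton behind that output is small; a self-contained example using the standard CUnit Basic interface (suite and test names here are made up, not SPDK's):

#include <CUnit/Basic.h>

static void test_example(void) { CU_ASSERT(1 + 1 == 2); }

int main(void)
{
        CU_pSuite suite;

        if (CU_initialize_registry() != CUE_SUCCESS) {
                return CU_get_error();
        }
        suite = CU_add_suite("example", NULL, NULL);
        if (suite == NULL || CU_add_test(suite, "test_example", test_example) == NULL) {
                CU_cleanup_registry();
                return CU_get_error();
        }
        CU_basic_run_tests();   /* prints the Run Summary table seen in the log */
        CU_cleanup_registry();
        return CU_get_error();
}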
00:05:59.196 07:52:04 -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:05:59.196 Test Options 00:05:59.196 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:05:59.196 00:05:59.196 00:05:59.196 CUnit - A unit testing framework for C - Version 2.1-3 00:05:59.196 http://cunit.sourceforge.net/ 00:05:59.196 00:05:59.196 00:05:59.196 Suite: raid 00:05:59.196 Test: test_create_raid ...passed 00:05:59.196 Test: test_create_raid_superblock ...passed 00:05:59.196 Test: test_delete_raid ...passed 00:05:59.196 Test: test_create_raid_invalid_args ...[2024-07-13 07:52:04.874006] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1357:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:05:59.196 [2024-07-13 07:52:04.874363] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1351:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:05:59.196 [2024-07-13 07:52:04.874661] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1341:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:05:59.196 [2024-07-13 07:52:04.874837] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:59.196 [2024-07-13 07:52:04.875831] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:59.196 passed 00:05:59.196 Test: test_delete_raid_invalid_args ...passed 00:05:59.196 Test: test_io_channel ...passed 00:05:59.196 Test: test_reset_io ...passed 00:05:59.196 Test: test_write_io ...passed 00:05:59.196 Test: test_read_io ...passed 00:06:00.131 Test: test_unmap_io ...passed 00:06:00.131 Test: test_io_failure ...passed 00:06:00.131 Test: test_multi_raid_no_io ...[2024-07-13 07:52:05.898047] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 832:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:06:00.131 passed 00:06:00.131 Test: test_multi_raid_with_io ...passed 00:06:00.131 Test: test_io_type_supported ...passed 00:06:00.131 Test: test_raid_json_dump_info ...passed 00:06:00.131 Test: test_context_size ...passed 00:06:00.131 Test: test_raid_level_conversions ...passed 00:06:00.131 Test: test_raid_process ...passed 00:06:00.131 Test: test_raid_io_split ...passed 00:06:00.131 00:06:00.131 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.131 suites 1 1 n/a 0 0 00:06:00.131 tests 19 19 19 0 0 00:06:00.131 asserts 177879 177879 177879 0 n/a 00:06:00.131 00:06:00.131 Elapsed time = 1.030 seconds 00:06:00.131 07:52:05 -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:06:00.391 00:06:00.391 00:06:00.391 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.391 http://cunit.sourceforge.net/ 00:06:00.391 00:06:00.391 00:06:00.391 Suite: raid_sb 00:06:00.391 Test: test_raid_bdev_write_superblock ...passed 00:06:00.391 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:00.391 Test: test_raid_bdev_parse_superblock ...passed 00:06:00.391 00:06:00.391 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.391 suites 1 1 n/a 0 0 00:06:00.391 tests 3 3 3 0 0 00:06:00.391 asserts 32 32 32 0 n/a 00:06:00.391 00:06:00.391 Elapsed time = 0.000 seconds 00:06:00.391 [2024-07-13 07:52:05.944429] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 
120:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:00.391 07:52:05 -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:06:00.391 00:06:00.391 00:06:00.391 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.391 http://cunit.sourceforge.net/ 00:06:00.391 00:06:00.391 00:06:00.391 Suite: concat 00:06:00.391 Test: test_concat_start ...passed 00:06:00.391 Test: test_concat_rw ...passed 00:06:00.391 Test: test_concat_null_payload ...passed 00:06:00.391 00:06:00.391 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.391 suites 1 1 n/a 0 0 00:06:00.391 tests 3 3 3 0 0 00:06:00.391 asserts 8097 8097 8097 0 n/a 00:06:00.391 00:06:00.391 Elapsed time = 0.010 seconds 00:06:00.391 07:52:05 -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:06:00.391 00:06:00.391 00:06:00.391 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.391 http://cunit.sourceforge.net/ 00:06:00.391 00:06:00.391 00:06:00.391 Suite: raid1 00:06:00.391 Test: test_raid1_start ...passed 00:06:00.391 Test: test_raid1_read_balancing ...passed 00:06:00.391 00:06:00.391 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.391 suites 1 1 n/a 0 0 00:06:00.391 tests 2 2 2 0 0 00:06:00.391 asserts 2856 2856 2856 0 n/a 00:06:00.391 00:06:00.391 Elapsed time = 0.000 seconds 00:06:00.391 07:52:06 -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:06:00.391 00:06:00.391 00:06:00.391 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.391 http://cunit.sourceforge.net/ 00:06:00.391 00:06:00.391 00:06:00.391 Suite: zone 00:06:00.391 Test: test_zone_get_operation ...passed 00:06:00.391 Test: test_bdev_zone_get_info ...passed 00:06:00.391 Test: test_bdev_zone_management ...passed 00:06:00.391 Test: test_bdev_zone_append ...passed 00:06:00.391 Test: test_bdev_zone_append_with_md ...passed 00:06:00.391 Test: test_bdev_zone_appendv ...passed 00:06:00.391 Test: test_bdev_zone_appendv_with_md ...passed 00:06:00.391 Test: test_bdev_io_get_append_location ...passed 00:06:00.391 00:06:00.391 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.391 suites 1 1 n/a 0 0 00:06:00.391 tests 8 8 8 0 0 00:06:00.391 asserts 94 94 94 0 n/a 00:06:00.391 00:06:00.391 Elapsed time = 0.000 seconds 00:06:00.391 07:52:06 -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:06:00.391 00:06:00.391 00:06:00.391 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.391 http://cunit.sourceforge.net/ 00:06:00.391 00:06:00.391 00:06:00.391 Suite: gpt_parse 00:06:00.391 Test: test_parse_mbr_and_primary ...passed 00:06:00.391 Test: test_parse_secondary ...passed 00:06:00.391 Test: test_check_mbr ...passed 00:06:00.391 Test: test_read_header ...passed 00:06:00.391 Test: test_read_partitions ...passed 00:06:00.391 00:06:00.391 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.391 suites 1 1 n/a 0 0 00:06:00.391 tests 5 5 5 0 0 00:06:00.391 asserts 33 33 33 0 n/a 00:06:00.391 00:06:00.391 Elapsed time = 0.000 seconds 00:06:00.391 [2024-07-13 07:52:06.066155] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:00.391 [2024-07-13 07:52:06.066474] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 
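The gpt_parse errors here, and the ones continuing below, walk the standard GPT header checks: header size, header CRC32, the "EFI PART" signature, my_lba matching the LBA the header was read from, a sane usable-LBA range, and at most 128 partition entries of 80 bytes. A freestanding sketch of the first structural checks (layout per the UEFI spec, abbreviated; this is not SPDK's gpt.c):

#include <stdint.h>
#include <string.h>

struct gpt_header {
        char     signature[8];  /* must be "EFI PART" ("signature did not match") */
        uint32_t revision;
        uint32_t header_size;   /* rejected when implausible ("head_size=...") */
        uint32_t header_crc32;  /* "head crc32 does not match" */
        uint32_t reserved;
        uint64_t my_lba;        /* must equal the LBA the header was read from */
        /* remaining fields elided */
};

static int
gpt_header_plausible(const struct gpt_header *h, uint64_t read_lba)
{
        if (memcmp(h->signature, "EFI PART", 8) != 0) {
                return -1;      /* "signature did not match" */
        }
        if (h->my_lba != read_lba) {
                return -1;      /* "head my_lba(...) != expected(...)" */
        }
        return 0;               /* CRC32 and LBA-range checks would follow */
}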
00:06:00.391 [2024-07-13 07:52:06.066534] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:00.391 [2024-07-13 07:52:06.066613] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:00.391 [2024-07-13 07:52:06.066650] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:00.391 [2024-07-13 07:52:06.066726] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:00.391 [2024-07-13 07:52:06.067020] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:00.392 [2024-07-13 07:52:06.067069] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:00.392 [2024-07-13 07:52:06.067099] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:00.392 [2024-07-13 07:52:06.067130] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:00.392 [2024-07-13 07:52:06.067398] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:00.392 [2024-07-13 07:52:06.067433] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:00.392 [2024-07-13 07:52:06.067486] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:06:00.392 [2024-07-13 07:52:06.067585] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:06:00.392 [2024-07-13 07:52:06.067665] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:06:00.392 [2024-07-13 07:52:06.067701] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:06:00.392 [2024-07-13 07:52:06.067728] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:06:00.392 [2024-07-13 07:52:06.067762] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:06:00.392 [2024-07-13 07:52:06.067799] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:06:00.392 [2024-07-13 07:52:06.067845] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:06:00.392 [2024-07-13 07:52:06.067878] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:06:00.392 [2024-07-13 07:52:06.067902] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:06:00.392 [2024-07-13 07:52:06.068062] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:06:00.392 07:52:06 -- unit/unittest.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:06:00.392 00:06:00.392 00:06:00.392 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.392 http://cunit.sourceforge.net/ 00:06:00.392 00:06:00.392 00:06:00.392 Suite: bdev_part 00:06:00.392 Test: part_test ...[2024-07-13 07:52:06.098429] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:06:00.392 passed 00:06:00.392 Test: part_free_test ...passed 00:06:00.392 Test: part_get_io_channel_test ...passed 00:06:00.392 Test: part_construct_ext ...passed 00:06:00.392 00:06:00.392 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.392 suites 1 1 n/a 0 0 00:06:00.392 tests 4 4 4 0 0 00:06:00.392 asserts 48 48 48 0 n/a 00:06:00.392 00:06:00.392 Elapsed time = 0.060 seconds 00:06:00.392 07:52:06 -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:06:00.392 00:06:00.392 00:06:00.392 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.392 http://cunit.sourceforge.net/ 00:06:00.392 00:06:00.392 00:06:00.392 Suite: scsi_nvme_suite 00:06:00.392 Test: scsi_nvme_translate_test ...passed 00:06:00.392 00:06:00.392 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.392 suites 1 1 n/a 0 0 00:06:00.392 tests 1 1 1 0 0 00:06:00.392 asserts 104 104 104 0 n/a 00:06:00.392 00:06:00.392 Elapsed time = 0.000 seconds 00:06:00.392 07:52:06 -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:06:00.652 00:06:00.652 00:06:00.652 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.652 http://cunit.sourceforge.net/ 00:06:00.652 00:06:00.652 00:06:00.652 Suite: lvol 00:06:00.652 Test: ut_lvs_init ...passed 00:06:00.652 Test: ut_lvol_init ...passed 00:06:00.652 Test: ut_lvol_snapshot ...[2024-07-13 07:52:06.209953] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:06:00.652 [2024-07-13 07:52:06.210348] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:06:00.652 passed 00:06:00.652 Test: ut_lvol_clone ...passed 00:06:00.652 Test: ut_lvs_destroy ...passed 00:06:00.652 Test: ut_lvs_unload ...passed 00:06:00.652 Test: ut_lvol_resize ...passed 00:06:00.652 Test: ut_lvol_set_read_only ...passed 00:06:00.652 Test: ut_lvol_hotremove ...passed 00:06:00.652 Test: ut_vbdev_lvol_get_io_channel ...passed 00:06:00.652 Test: ut_vbdev_lvol_io_type_supported ...passed 00:06:00.652 Test: ut_lvol_read_write ...passed 00:06:00.652 Test: ut_vbdev_lvol_submit_request ...passed 00:06:00.652 Test: ut_lvol_examine_config ...passed 00:06:00.652 Test: ut_lvol_examine_disk ...[2024-07-13 07:52:06.211157] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:06:00.652 [2024-07-13 07:52:06.211643] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:06:00.652 passed 00:06:00.652 Test: ut_lvol_rename ...passed 00:06:00.652 Test: ut_bdev_finish ...passed 00:06:00.652 Test: ut_lvs_rename ...[2024-07-13 07:52:06.212102] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:06:00.652 [2024-07-13 07:52:06.212185] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: 
renaming lvol to 'new_lvol_name' does not succeed 00:06:00.652 passed 00:06:00.652 Test: ut_lvol_seek ...passed 00:06:00.652 Test: ut_esnap_dev_create ...[2024-07-13 07:52:06.212602] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:06:00.652 passed 00:06:00.652 Test: ut_lvol_esnap_clone_bad_args ...passed 00:06:00.652 00:06:00.652 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.652 suites 1 1 n/a 0 0 00:06:00.652 tests 21 21 21 0 0 00:06:00.652 asserts 712 712 712 0 n/a 00:06:00.652 00:06:00.652 Elapsed time = 0.010 seconds 00:06:00.652 [2024-07-13 07:52:06.212667] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:06:00.652 [2024-07-13 07:52:06.212696] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:06:00.652 [2024-07-13 07:52:06.212746] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1900:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:06:00.652 [2024-07-13 07:52:06.212873] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:06:00.652 [2024-07-13 07:52:06.212913] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:06:00.652 07:52:06 -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:06:00.652 00:06:00.652 00:06:00.652 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.652 http://cunit.sourceforge.net/ 00:06:00.652 00:06:00.652 00:06:00.652 Suite: zone_block 00:06:00.652 Test: test_zone_block_create ...passed 00:06:00.652 Test: test_zone_block_create_invalid ...[2024-07-13 07:52:06.267098] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:06:00.652 [2024-07-13 07:52:06.267380] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-13 07:52:06.267525] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:06:00.652 [2024-07-13 07:52:06.267598] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File existspassed 00:06:00.652 Test: test_get_zone_info ...[2024-07-13 07:52:06.267654] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:06:00.652 [2024-07-13 07:52:06.267695] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-13 07:52:06.267756] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:06:00.652 [2024-07-13 07:52:06.267806] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: 
Invalid argumentpassed 00:06:00.652 Test: test_supported_io_types ...passed 00:06:00.652 Test: test_reset_zone ...passed 00:06:00.652 Test: test_open_zone ...passed 00:06:00.652 Test: test_zone_write ...[2024-07-13 07:52:06.268141] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:00.652 [2024-07-13 07:52:06.268199] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:00.652 [2024-07-13 07:52:06.268241] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:00.652 [2024-07-13 07:52:06.268750] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:00.652 [2024-07-13 07:52:06.268786] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:00.652 [2024-07-13 07:52:06.269066] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:00.652 [2024-07-13 07:52:06.269608] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:00.652 [2024-07-13 07:52:06.269656] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:00.652 [2024-07-13 07:52:06.269957] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:00.652 [2024-07-13 07:52:06.269995] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:00.652 [2024-07-13 07:52:06.270052] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:00.652 [2024-07-13 07:52:06.270098] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:00.652 [2024-07-13 07:52:06.276140] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:06:00.652 [2024-07-13 07:52:06.276194] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:00.652 [2024-07-13 07:52:06.276265] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:06:00.652 [2024-07-13 07:52:06.276296] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:06:00.652 [2024-07-13 07:52:06.282318] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:00.652 [2024-07-13 07:52:06.282387] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:00.652 passed 00:06:00.652 Test: test_zone_read ...[2024-07-13 07:52:06.282860] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:06:00.652 [2024-07-13 07:52:06.282901] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:00.652 passed 00:06:00.652 Test: test_close_zone ...passed 00:06:00.652 Test: test_finish_zone ...passed 00:06:00.652 Test: test_append_zone ...[2024-07-13 07:52:06.282954] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:06:00.652 [2024-07-13 07:52:06.282987] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:00.652 [2024-07-13 07:52:06.283310] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:06:00.652 [2024-07-13 07:52:06.283338] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:00.652 [2024-07-13 07:52:06.283596] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:00.652 [2024-07-13 07:52:06.283657] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:00.652 [2024-07-13 07:52:06.283775] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:00.652 [2024-07-13 07:52:06.283805] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:00.652 [2024-07-13 07:52:06.284176] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:00.652 [2024-07-13 07:52:06.284213] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:00.652 [2024-07-13 07:52:06.284487] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:00.652 [2024-07-13 07:52:06.284517] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:00.652 [2024-07-13 07:52:06.284561] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:00.652 [2024-07-13 07:52:06.284586] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
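The zone_block_write and zone_block_read errors above reduce to a few write-pointer rules: the LBA must fall in a known zone, the zone must be in a writable state, the LBA must equal the zone's current write pointer, and lba+len must stay inside the zone capacity. A standalone restatement (field names are illustrative, not SPDK's vbdev_zone_block structs):

#include <stdint.h>

struct zone {
        uint64_t start_lba;
        uint64_t capacity;      /* usable blocks from start_lba */
        uint64_t write_ptr;     /* next writable LBA */
        int      writable;      /* writes only in an open/active state */
};

static int
zone_write_ok(const struct zone *z, uint64_t lba, uint64_t len)
{
        if (z == NULL) {
                return -1;      /* "Trying to write to invalid zone (lba ...)" */
        }
        if (!z->writable) {
                return -1;      /* "Trying to write to zone in invalid state" */
        }
        if (lba != z->write_ptr) {
                return -1;      /* "invalid address (lba 0x407, wp 0x405)" */
        }
        if (lba + len > z->start_lba + z->capacity) {
                return -1;      /* "Write exceeds zone capacity" */
        }
        return 0;
}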
00:06:00.652 passed 00:06:00.652 00:06:00.652 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.652 suites 1 1 n/a 0 0 00:06:00.652 tests 11 11 11 0 0 00:06:00.652 asserts 3437 3437 3437 0 n/a 00:06:00.652 00:06:00.652 Elapsed time = 0.030 seconds 00:06:00.652 [2024-07-13 07:52:06.296683] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:00.652 [2024-07-13 07:52:06.296742] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:00.652 07:52:06 -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:06:00.652 00:06:00.652 00:06:00.652 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.652 http://cunit.sourceforge.net/ 00:06:00.652 00:06:00.652 00:06:00.652 Suite: bdev 00:06:00.652 Test: basic ...[2024-07-13 07:52:06.403176] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x51e721): Operation not permitted (rc=-1) 00:06:00.652 [2024-07-13 07:52:06.403638] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x51e6e0): Operation not permitted (rc=-1) 00:06:00.652 [2024-07-13 07:52:06.403764] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x51e721): Operation not permitted (rc=-1) 00:06:00.652 passed 00:06:00.912 Test: unregister_and_close ...passed 00:06:00.912 Test: unregister_and_close_different_threads ...passed 00:06:00.912 Test: basic_qos ...passed 00:06:00.912 Test: put_channel_during_reset ...passed 00:06:01.170 Test: aborted_reset ...passed 00:06:01.170 Test: aborted_reset_no_outstanding_io ...passed 00:06:01.170 Test: io_during_reset ...passed 00:06:01.170 Test: reset_completions ...passed 00:06:01.428 Test: io_during_qos_queue ...passed 00:06:01.428 Test: io_during_qos_reset ...passed 00:06:01.428 Test: enomem ...passed 00:06:01.428 Test: enomem_multi_bdev ...passed 00:06:01.428 Test: enomem_multi_bdev_unregister ...passed 00:06:01.687 Test: enomem_multi_io_target ...passed 00:06:01.687 Test: qos_dynamic_enable ...passed 00:06:01.687 Test: bdev_histograms_mt ...passed 00:06:01.687 Test: bdev_set_io_timeout_mt ...passed 00:06:01.687 Test: lock_lba_range_then_submit_io ...[2024-07-13 07:52:07.397898] thread.c: 465:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:06:01.687 [2024-07-13 07:52:07.417643] thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x51e6a0 already registered (old:0x6130000003c0 new:0x613000000c80) 00:06:01.687 passed 00:06:01.687 Test: unregister_during_reset ...passed 00:06:01.946 Test: event_notify_and_close ...passed 00:06:01.946 Test: unregister_and_qos_poller ...passed 00:06:01.946 Suite: bdev_wrong_thread 00:06:01.946 Test: spdk_bdev_register_wt ...passed 00:06:01.946 Test: spdk_bdev_examine_wt ...passed 00:06:01.946 00:06:01.946 [2024-07-13 07:52:07.575818] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8364:spdk_bdev_register: *ERROR*: Cannot examine bdev wt_bdev on thread 0x618000001480 (0x618000001480) 00:06:01.946 [2024-07-13 07:52:07.576016] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 793:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x618000001480 (0x618000001480) 00:06:01.946 Run Summary: Type Total Ran Passed Failed Inactive 00:06:01.946 suites 2 2 n/a 0 0 00:06:01.946 tests 
24 24 24 0 0 00:06:01.946 asserts 621 621 621 0 n/a 00:06:01.946 00:06:01.946 Elapsed time = 1.200 seconds 00:06:01.946 ************************************ 00:06:01.946 END TEST unittest_bdev 00:06:01.946 ************************************ 00:06:01.946 00:06:01.946 real 0m4.781s 00:06:01.946 user 0m1.875s 00:06:01.946 sys 0m2.900s 00:06:01.946 07:52:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.946 07:52:07 -- common/autotest_common.sh@10 -- # set +x 00:06:01.946 07:52:07 -- unit/unittest.sh@213 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:01.946 07:52:07 -- unit/unittest.sh@218 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:01.946 07:52:07 -- unit/unittest.sh@223 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:01.946 07:52:07 -- unit/unittest.sh@227 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:01.946 07:52:07 -- unit/unittest.sh@231 -- # run_test unittest_blob_blobfs unittest_blob 00:06:01.946 07:52:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:01.946 07:52:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:01.946 07:52:07 -- common/autotest_common.sh@10 -- # set +x 00:06:01.946 ************************************ 00:06:01.946 START TEST unittest_blob_blobfs 00:06:01.946 ************************************ 00:06:01.946 07:52:07 -- common/autotest_common.sh@1104 -- # unittest_blob 00:06:01.946 07:52:07 -- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:06:01.947 07:52:07 -- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:06:01.947 00:06:01.947 00:06:01.947 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.947 http://cunit.sourceforge.net/ 00:06:01.947 00:06:01.947 00:06:01.947 Suite: blob_nocopy_noextent 00:06:01.947 Test: blob_init ...[2024-07-13 07:52:07.693099] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:06:01.947 passed 00:06:01.947 Test: blob_thin_provision ...passed 00:06:01.947 Test: blob_read_only ...passed 00:06:01.947 Test: bs_load ...[2024-07-13 07:52:07.747793] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:06:01.947 passed 00:06:02.205 Test: bs_load_custom_cluster_size ...passed 00:06:02.205 Test: bs_load_after_failed_grow ...passed 00:06:02.205 Test: bs_cluster_sz ...[2024-07-13 07:52:07.770208] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:06:02.205 [2024-07-13 07:52:07.770574] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:06:02.205 [2024-07-13 07:52:07.770709] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:06:02.205 passed 00:06:02.205 Test: bs_resize_md ...passed 00:06:02.205 Test: bs_destroy ...passed 00:06:02.205 Test: bs_type ...passed 00:06:02.205 Test: bs_super_block ...passed 00:06:02.205 Test: bs_test_recover_cluster_count ...passed 00:06:02.205 Test: bs_grow_live ...passed 00:06:02.205 Test: bs_grow_live_no_space ...passed 00:06:02.205 Test: bs_test_grow ...passed 00:06:02.205 Test: blob_serialize_test ...passed 00:06:02.205 Test: super_block_crc ...passed 00:06:02.205 Test: blob_thin_prov_write_count_io ...passed 00:06:02.205 Test: bs_load_iter_test ...passed 00:06:02.205 Test: blob_relations ...[2024-07-13 07:52:07.895103] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:02.205 [2024-07-13 07:52:07.895197] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:02.205 [2024-07-13 07:52:07.895959] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:02.205 [2024-07-13 07:52:07.896013] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:02.205 passed 00:06:02.205 Test: blob_relations2 ...[2024-07-13 07:52:07.906325] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:02.205 [2024-07-13 07:52:07.906394] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:02.205 [2024-07-13 07:52:07.906424] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:02.205 [2024-07-13 07:52:07.906442] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:02.205 [2024-07-13 07:52:07.907278] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:02.205 [2024-07-13 07:52:07.907317] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:02.205 [2024-07-13 07:52:07.907706] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:02.205 [2024-07-13 07:52:07.907754] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:02.205 passed 00:06:02.205 Test: blob_relations3 ...passed 00:06:02.205 Test: blobstore_clean_power_failure ...passed 00:06:02.464 Test: blob_delete_snapshot_power_failure ...[2024-07-13 07:52:08.019038] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:02.464 [2024-07-13 07:52:08.027693] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:02.464 [2024-07-13 07:52:08.027770] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:02.464 [2024-07-13 07:52:08.027813] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:02.464 [2024-07-13 07:52:08.036601] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:02.464 [2024-07-13 07:52:08.036666] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:02.464 [2024-07-13 07:52:08.036718] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:02.464 [2024-07-13 07:52:08.036745] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:02.464 [2024-07-13 07:52:08.045259] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:06:02.464 [2024-07-13 07:52:08.045355] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:02.464 [2024-07-13 07:52:08.057985] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:06:02.464 [2024-07-13 07:52:08.058153] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:02.464 [2024-07-13 07:52:08.067607] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:06:02.464 [2024-07-13 07:52:08.067684] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:02.464 passed 00:06:02.464 Test: blob_create_snapshot_power_failure ...[2024-07-13 07:52:08.093778] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:02.464 [2024-07-13 07:52:08.110698] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:02.464 [2024-07-13 07:52:08.119598] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:06:02.464 passed 00:06:02.464 Test: blob_io_unit ...passed 00:06:02.464 Test: blob_io_unit_compatibility ...passed 00:06:02.464 Test: blob_ext_md_pages ...passed 00:06:02.464 Test: blob_esnap_io_4096_4096 ...passed 00:06:02.464 Test: blob_esnap_io_512_512 ...passed 00:06:02.464 Test: blob_esnap_io_4096_512 ...passed 00:06:02.464 Test: blob_esnap_io_512_4096 ...passed 00:06:02.464 Suite: blob_bs_nocopy_noextent 00:06:02.464 Test: blob_open ...passed 00:06:02.723 Test: blob_create ...[2024-07-13 07:52:08.284106] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:06:02.723 passed 00:06:02.723 Test: blob_create_loop ...passed 00:06:02.723 Test: blob_create_fail ...[2024-07-13 07:52:08.351270] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:02.723 passed 00:06:02.723 Test: blob_create_internal ...passed 00:06:02.723 Test: blob_create_zero_extent ...passed 00:06:02.723 Test: blob_snapshot ...passed 00:06:02.723 Test: blob_clone ...passed 00:06:02.723 Test: blob_inflate ...[2024-07-13 07:52:08.475384] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:06:02.723 passed 00:06:02.723 Test: blob_delete ...passed 00:06:02.723 Test: blob_resize_test ...[2024-07-13 07:52:08.520204] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:06:02.723 passed 00:06:02.994 Test: channel_ops ...passed 00:06:02.994 Test: blob_super ...passed 00:06:02.994 Test: blob_rw_verify_iov ...passed 00:06:02.994 Test: blob_unmap ...passed 00:06:02.994 Test: blob_iter ...passed 00:06:02.994 Test: blob_parse_md ...passed 00:06:02.994 Test: bs_load_pending_removal ...passed 00:06:02.994 Test: bs_unload ...[2024-07-13 07:52:08.715979] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:06:02.994 passed 00:06:02.994 Test: bs_usable_clusters ...passed 00:06:02.994 Test: blob_crc ...[2024-07-13 07:52:08.763096] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:02.994 [2024-07-13 07:52:08.763198] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:02.994 passed 00:06:02.994 Test: blob_flags ...passed 00:06:03.252 Test: bs_version ...passed 00:06:03.252 Test: blob_set_xattrs_test ...[2024-07-13 07:52:08.835434] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:03.252 [2024-07-13 07:52:08.835541] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:03.252 passed 00:06:03.252 Test: blob_thin_prov_alloc ...passed 00:06:03.252 Test: blob_insert_cluster_msg_test ...passed 00:06:03.252 Test: blob_thin_prov_rw ...passed 00:06:03.252 Test: blob_thin_prov_rle ...passed 00:06:03.252 Test: blob_thin_prov_rw_iov ...passed 00:06:03.252 Test: blob_snapshot_rw ...passed 00:06:03.509 Test: blob_snapshot_rw_iov ...passed 00:06:03.509 Test: blob_inflate_rw ...passed 00:06:03.509 Test: blob_snapshot_freeze_io ...passed 00:06:03.767 Test: blob_operation_split_rw ...passed 00:06:03.767 Test: blob_operation_split_rw_iov ...passed 00:06:03.767 Test: blob_simultaneous_operations ...[2024-07-13 07:52:09.450919] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:03.768 [2024-07-13 07:52:09.451014] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.768 [2024-07-13 07:52:09.452602] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:03.768 [2024-07-13 07:52:09.452661] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.768 [2024-07-13 07:52:09.466081] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:03.768 [2024-07-13 07:52:09.466147] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.768 [2024-07-13 07:52:09.466244] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:06:03.768 [2024-07-13 07:52:09.466279] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.768 passed 00:06:03.768 Test: blob_persist_test ...passed 00:06:03.768 Test: blob_decouple_snapshot ...passed 00:06:04.025 Test: blob_seek_io_unit ...passed 00:06:04.025 Test: blob_nested_freezes ...passed 00:06:04.025 Suite: blob_blob_nocopy_noextent 00:06:04.025 Test: blob_write ...passed 00:06:04.025 Test: blob_read ...passed 00:06:04.025 Test: blob_rw_verify ...passed 00:06:04.025 Test: blob_rw_verify_iov_nomem ...passed 00:06:04.025 Test: blob_rw_iov_read_only ...passed 00:06:04.025 Test: blob_xattr ...passed 00:06:04.025 Test: blob_dirty_shutdown ...passed 00:06:04.025 Test: blob_is_degraded ...passed 00:06:04.025 Suite: blob_esnap_bs_nocopy_noextent 00:06:04.283 Test: blob_esnap_create ...passed 00:06:04.283 Test: blob_esnap_thread_add_remove ...passed 00:06:04.283 Test: blob_esnap_clone_snapshot ...passed 00:06:04.283 Test: blob_esnap_clone_inflate ...passed 00:06:04.283 Test: blob_esnap_clone_decouple ...passed 00:06:04.283 Test: blob_esnap_clone_reload ...passed 00:06:04.283 Test: blob_esnap_hotplug ...passed 00:06:04.283 Suite: blob_nocopy_extent 00:06:04.283 Test: blob_init ...[2024-07-13 07:52:10.010177] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:06:04.283 passed 00:06:04.283 Test: blob_thin_provision ...passed 00:06:04.283 Test: blob_read_only ...passed 00:06:04.283 Test: bs_load ...[2024-07-13 07:52:10.047358] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:06:04.283 passed 00:06:04.283 Test: bs_load_custom_cluster_size ...passed 00:06:04.283 Test: bs_load_after_failed_grow ...passed 00:06:04.283 Test: bs_cluster_sz ...[2024-07-13 07:52:10.065642] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:06:04.283 [2024-07-13 07:52:10.065863] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
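
The blob_delete_snapshot_power_failure and blob_create_snapshot_power_failure blocks above produce their runs of "Metadata page N read failed ...: -5" and "Failed to remove blob" by backing the blobstore with a device that stops cooperating after a set number of I/Os, then verifying that an interrupted operation leaves the store recoverable. A rough sketch of such a failure-injecting dev; the struct, names and policy are assumptions, not the unit tests' actual harness:

    #include <errno.h>
    #include <stdint.h>
    #include <string.h>

    /* Failure-injecting backing dev: succeeds for the first 'budget' I/Os,
     * then fails everything with -EIO (the -5 seen in the log). */
    struct flaky_dev {
        uint8_t  *buf;     /* in-memory backing storage */
        uint64_t  size;    /* bytes available */
        int       budget;  /* I/Os remaining before the simulated power cut */
    };

    static int flaky_dev_read(struct flaky_dev *dev, void *payload,
                              uint64_t offset, uint64_t length)
    {
        if (dev->budget-- <= 0) {
            return -EIO;  /* "Metadata page 1 read failed for blobid ...: -5" */
        }
        if (offset + length > dev->size) {
            return -EINVAL;
        }
        memcpy(payload, dev->buf + offset, length);
        return 0;
    }
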
00:06:04.283 [2024-07-13 07:52:10.065905] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:06:04.283 passed 00:06:04.283 Test: bs_resize_md ...passed 00:06:04.283 Test: bs_destroy ...passed 00:06:04.541 Test: bs_type ...passed 00:06:04.541 Test: bs_super_block ...passed 00:06:04.541 Test: bs_test_recover_cluster_count ...passed 00:06:04.541 Test: bs_grow_live ...passed 00:06:04.541 Test: bs_grow_live_no_space ...passed 00:06:04.541 Test: bs_test_grow ...passed 00:06:04.541 Test: blob_serialize_test ...passed 00:06:04.541 Test: super_block_crc ...passed 00:06:04.541 Test: blob_thin_prov_write_count_io ...passed 00:06:04.541 Test: bs_load_iter_test ...passed 00:06:04.541 Test: blob_relations ...[2024-07-13 07:52:10.171691] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:04.541 [2024-07-13 07:52:10.171783] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:04.541 [2024-07-13 07:52:10.172784] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:04.541 [2024-07-13 07:52:10.172854] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:04.541 passed 00:06:04.541 Test: blob_relations2 ...[2024-07-13 07:52:10.183338] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:04.541 [2024-07-13 07:52:10.183409] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:04.541 [2024-07-13 07:52:10.183433] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:04.541 [2024-07-13 07:52:10.183476] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:04.541 [2024-07-13 07:52:10.184680] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:04.541 [2024-07-13 07:52:10.184725] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:04.541 [2024-07-13 07:52:10.185093] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:04.541 [2024-07-13 07:52:10.185129] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:04.541 passed 00:06:04.541 Test: blob_relations3 ...passed 00:06:04.541 Test: blobstore_clean_power_failure ...passed 00:06:04.541 Test: blob_delete_snapshot_power_failure ...[2024-07-13 07:52:10.294527] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:04.541 [2024-07-13 07:52:10.303466] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:04.541 [2024-07-13 07:52:10.312478] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:04.541 [2024-07-13 07:52:10.312549] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:04.541 [2024-07-13 07:52:10.312579] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:04.541 [2024-07-13 07:52:10.321536] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:04.541 [2024-07-13 07:52:10.321609] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:04.541 [2024-07-13 07:52:10.321644] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:04.541 [2024-07-13 07:52:10.321668] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:04.541 [2024-07-13 07:52:10.330231] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:04.541 [2024-07-13 07:52:10.330293] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:04.541 [2024-07-13 07:52:10.330332] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:04.541 [2024-07-13 07:52:10.330369] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:04.541 [2024-07-13 07:52:10.338669] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:06:04.541 [2024-07-13 07:52:10.338745] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:04.541 [2024-07-13 07:52:10.346808] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:06:04.541 [2024-07-13 07:52:10.346910] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:04.799 [2024-07-13 07:52:10.355360] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:06:04.799 [2024-07-13 07:52:10.355442] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:04.799 passed 00:06:04.799 Test: blob_create_snapshot_power_failure ...[2024-07-13 07:52:10.379572] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:04.799 [2024-07-13 07:52:10.387281] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:04.799 [2024-07-13 07:52:10.402972] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:04.799 [2024-07-13 07:52:10.411141] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:06:04.799 passed 00:06:04.799 Test: blob_io_unit ...passed 00:06:04.799 Test: blob_io_unit_compatibility ...passed 00:06:04.799 Test: blob_ext_md_pages ...passed 00:06:04.799 Test: blob_esnap_io_4096_4096 ...passed 00:06:04.799 Test: blob_esnap_io_512_512 ...passed 00:06:04.799 Test: blob_esnap_io_4096_512 ...passed 00:06:04.799 Test: 
blob_esnap_io_512_4096 ...passed 00:06:04.799 Suite: blob_bs_nocopy_extent 00:06:04.799 Test: blob_open ...passed 00:06:04.799 Test: blob_create ...[2024-07-13 07:52:10.593543] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:06:04.799 passed 00:06:05.057 Test: blob_create_loop ...passed 00:06:05.057 Test: blob_create_fail ...[2024-07-13 07:52:10.668542] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:05.057 passed 00:06:05.057 Test: blob_create_internal ...passed 00:06:05.057 Test: blob_create_zero_extent ...passed 00:06:05.057 Test: blob_snapshot ...passed 00:06:05.057 Test: blob_clone ...passed 00:06:05.057 Test: blob_inflate ...[2024-07-13 07:52:10.799378] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:06:05.057 passed 00:06:05.057 Test: blob_delete ...passed 00:06:05.057 Test: blob_resize_test ...[2024-07-13 07:52:10.846951] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:06:05.057 passed 00:06:05.314 Test: channel_ops ...passed 00:06:05.314 Test: blob_super ...passed 00:06:05.314 Test: blob_rw_verify_iov ...passed 00:06:05.314 Test: blob_unmap ...passed 00:06:05.314 Test: blob_iter ...passed 00:06:05.314 Test: blob_parse_md ...passed 00:06:05.314 Test: bs_load_pending_removal ...passed 00:06:05.314 Test: bs_unload ...[2024-07-13 07:52:11.042164] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:06:05.314 passed 00:06:05.314 Test: bs_usable_clusters ...passed 00:06:05.314 Test: blob_crc ...[2024-07-13 07:52:11.089474] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:05.314 [2024-07-13 07:52:11.089581] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:05.314 passed 00:06:05.314 Test: blob_flags ...passed 00:06:05.571 Test: bs_version ...passed 00:06:05.571 Test: blob_set_xattrs_test ...[2024-07-13 07:52:11.161147] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:05.571 [2024-07-13 07:52:11.161235] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:05.571 passed 00:06:05.571 Test: blob_thin_prov_alloc ...passed 00:06:05.571 Test: blob_insert_cluster_msg_test ...passed 00:06:05.571 Test: blob_thin_prov_rw ...passed 00:06:05.571 Test: blob_thin_prov_rle ...passed 00:06:05.571 Test: blob_thin_prov_rw_iov ...passed 00:06:05.571 Test: blob_snapshot_rw ...passed 00:06:05.828 Test: blob_snapshot_rw_iov ...passed 00:06:05.828 Test: blob_inflate_rw ...passed 00:06:05.828 Test: blob_snapshot_freeze_io ...passed 00:06:06.086 Test: blob_operation_split_rw ...passed 00:06:06.086 Test: blob_operation_split_rw_iov ...passed 00:06:06.086 Test: blob_simultaneous_operations ...[2024-07-13 07:52:11.831349] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:06.086 [2024-07-13 
07:52:11.831430] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:06.086 [2024-07-13 07:52:11.832687] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:06.086 [2024-07-13 07:52:11.832726] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:06.086 [2024-07-13 07:52:11.844333] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:06.086 [2024-07-13 07:52:11.844399] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:06.086 [2024-07-13 07:52:11.844620] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:06.086 [2024-07-13 07:52:11.844659] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:06.086 passed 00:06:06.086 Test: blob_persist_test ...passed 00:06:06.344 Test: blob_decouple_snapshot ...passed 00:06:06.344 Test: blob_seek_io_unit ...passed 00:06:06.344 Test: blob_nested_freezes ...passed 00:06:06.344 Suite: blob_blob_nocopy_extent 00:06:06.344 Test: blob_write ...passed 00:06:06.344 Test: blob_read ...passed 00:06:06.344 Test: blob_rw_verify ...passed 00:06:06.344 Test: blob_rw_verify_iov_nomem ...passed 00:06:06.344 Test: blob_rw_iov_read_only ...passed 00:06:06.344 Test: blob_xattr ...passed 00:06:06.603 Test: blob_dirty_shutdown ...passed 00:06:06.603 Test: blob_is_degraded ...passed 00:06:06.603 Suite: blob_esnap_bs_nocopy_extent 00:06:06.603 Test: blob_esnap_create ...passed 00:06:06.603 Test: blob_esnap_thread_add_remove ...passed 00:06:06.603 Test: blob_esnap_clone_snapshot ...passed 00:06:06.603 Test: blob_esnap_clone_inflate ...passed 00:06:06.603 Test: blob_esnap_clone_decouple ...passed 00:06:06.603 Test: blob_esnap_clone_reload ...passed 00:06:06.603 Test: blob_esnap_hotplug ...passed 00:06:06.603 Suite: blob_copy_noextent 00:06:06.603 Test: blob_init ...[2024-07-13 07:52:12.376118] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:06:06.603 passed 00:06:06.603 Test: blob_thin_provision ...passed 00:06:06.603 Test: blob_read_only ...passed 00:06:06.603 Test: bs_load ...[2024-07-13 07:52:12.404874] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:06:06.603 passed 00:06:06.603 Test: bs_load_custom_cluster_size ...passed 00:06:06.860 Test: bs_load_after_failed_grow ...passed 00:06:06.860 Test: bs_cluster_sz ...[2024-07-13 07:52:12.419966] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:06:06.860 [2024-07-13 07:52:12.420077] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:06:06.860 [2024-07-13 07:52:12.420108] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:06:06.860 passed 00:06:06.860 Test: bs_resize_md ...passed 00:06:06.860 Test: bs_destroy ...passed 00:06:06.860 Test: bs_type ...passed 00:06:06.860 Test: bs_super_block ...passed 00:06:06.860 Test: bs_test_recover_cluster_count ...passed 00:06:06.860 Test: bs_grow_live ...passed 00:06:06.860 Test: bs_grow_live_no_space ...passed 00:06:06.860 Test: bs_test_grow ...passed 00:06:06.860 Test: blob_serialize_test ...passed 00:06:06.860 Test: super_block_crc ...passed 00:06:06.860 Test: blob_thin_prov_write_count_io ...passed 00:06:06.860 Test: bs_load_iter_test ...passed 00:06:06.860 Test: blob_relations ...[2024-07-13 07:52:12.517668] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:06.860 [2024-07-13 07:52:12.517753] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:06.860 [2024-07-13 07:52:12.518082] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:06.860 [2024-07-13 07:52:12.518102] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:06.860 passed 00:06:06.860 Test: blob_relations2 ...[2024-07-13 07:52:12.526981] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:06.860 [2024-07-13 07:52:12.527054] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:06.860 [2024-07-13 07:52:12.527075] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:06.860 [2024-07-13 07:52:12.527089] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:06.860 [2024-07-13 07:52:12.527612] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:06.860 [2024-07-13 07:52:12.527644] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:06.860 [2024-07-13 07:52:12.527826] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:06.860 [2024-07-13 07:52:12.527854] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:06.860 passed 00:06:06.860 Test: blob_relations3 ...passed 00:06:06.860 Test: blobstore_clean_power_failure ...passed 00:06:06.860 Test: blob_delete_snapshot_power_failure ...[2024-07-13 07:52:12.631554] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:06.860 [2024-07-13 07:52:12.639344] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:06.860 [2024-07-13 07:52:12.639406] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:06.860 [2024-07-13 07:52:12.639429] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:06.860 [2024-07-13 07:52:12.647195] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:06.860 [2024-07-13 07:52:12.647258] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:06.860 [2024-07-13 07:52:12.647293] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:06.860 [2024-07-13 07:52:12.647310] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:06.860 [2024-07-13 07:52:12.655104] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:06:06.860 [2024-07-13 07:52:12.655184] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:06.860 [2024-07-13 07:52:12.663163] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:06:06.860 [2024-07-13 07:52:12.663241] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:06.860 [2024-07-13 07:52:12.670950] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:06:06.860 [2024-07-13 07:52:12.671027] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:07.118 passed 00:06:07.118 Test: blob_create_snapshot_power_failure ...[2024-07-13 07:52:12.694110] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:07.118 [2024-07-13 07:52:12.709007] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:07.118 [2024-07-13 07:52:12.722509] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:06:07.118 passed 00:06:07.118 Test: blob_io_unit ...passed 00:06:07.118 Test: blob_io_unit_compatibility ...passed 00:06:07.118 Test: blob_ext_md_pages ...passed 00:06:07.118 Test: blob_esnap_io_4096_4096 ...passed 00:06:07.118 Test: blob_esnap_io_512_512 ...passed 00:06:07.118 Test: blob_esnap_io_4096_512 ...passed 00:06:07.118 Test: blob_esnap_io_512_4096 ...passed 00:06:07.118 Suite: blob_bs_copy_noextent 00:06:07.118 Test: blob_open ...passed 00:06:07.118 Test: blob_create ...[2024-07-13 07:52:12.903758] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:06:07.118 passed 00:06:07.376 Test: blob_create_loop ...passed 00:06:07.376 Test: blob_create_fail ...[2024-07-13 07:52:12.977245] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:07.376 passed 00:06:07.376 Test: blob_create_internal ...passed 00:06:07.376 Test: blob_create_zero_extent ...passed 00:06:07.376 Test: blob_snapshot ...passed 00:06:07.376 Test: blob_clone ...passed 00:06:07.376 Test: blob_inflate ...[2024-07-13 07:52:13.107359] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:06:07.376 passed 00:06:07.376 Test: blob_delete ...passed 00:06:07.376 Test: blob_resize_test ...[2024-07-13 07:52:13.152857] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:06:07.376 passed 00:06:07.376 Test: channel_ops ...passed 00:06:07.633 Test: blob_super ...passed 00:06:07.633 Test: blob_rw_verify_iov ...passed 00:06:07.633 Test: blob_unmap ...passed 00:06:07.633 Test: blob_iter ...passed 00:06:07.633 Test: blob_parse_md ...passed 00:06:07.633 Test: bs_load_pending_removal ...passed 00:06:07.633 Test: bs_unload ...[2024-07-13 07:52:13.348271] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:06:07.633 passed 00:06:07.633 Test: bs_usable_clusters ...passed 00:06:07.633 Test: blob_crc ...[2024-07-13 07:52:13.392834] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:07.633 [2024-07-13 07:52:13.392937] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:07.633 passed 00:06:07.633 Test: blob_flags ...passed 00:06:07.891 Test: bs_version ...passed 00:06:07.891 Test: blob_set_xattrs_test ...[2024-07-13 07:52:13.464613] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:07.891 [2024-07-13 07:52:13.464707] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:07.891 passed 00:06:07.891 Test: blob_thin_prov_alloc ...passed 00:06:07.891 Test: blob_insert_cluster_msg_test ...passed 00:06:07.891 Test: blob_thin_prov_rw ...passed 00:06:07.891 Test: blob_thin_prov_rle ...passed 00:06:07.891 Test: blob_thin_prov_rw_iov ...passed 00:06:07.891 Test: blob_snapshot_rw ...passed 00:06:07.891 Test: blob_snapshot_rw_iov ...passed 00:06:08.149 Test: blob_inflate_rw ...passed 00:06:08.149 Test: blob_snapshot_freeze_io ...passed 00:06:08.149 Test: blob_operation_split_rw ...passed 00:06:08.407 Test: blob_operation_split_rw_iov ...passed 00:06:08.407 Test: blob_simultaneous_operations ...[2024-07-13 07:52:14.093238] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:08.407 [2024-07-13 07:52:14.093318] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:08.407 [2024-07-13 07:52:14.093824] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:08.407 [2024-07-13 07:52:14.093866] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:08.407 [2024-07-13 07:52:14.095727] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:08.407 [2024-07-13 07:52:14.095767] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:08.407 [2024-07-13 07:52:14.095825] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:06:08.407 [2024-07-13 07:52:14.095840] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:08.407 passed 00:06:08.407 Test: blob_persist_test ...passed 00:06:08.407 Test: blob_decouple_snapshot ...passed 00:06:08.407 Test: blob_seek_io_unit ...passed 00:06:08.407 Test: blob_nested_freezes ...passed 00:06:08.407 Suite: blob_blob_copy_noextent 00:06:08.666 Test: blob_write ...passed 00:06:08.666 Test: blob_read ...passed 00:06:08.666 Test: blob_rw_verify ...passed 00:06:08.666 Test: blob_rw_verify_iov_nomem ...passed 00:06:08.666 Test: blob_rw_iov_read_only ...passed 00:06:08.666 Test: blob_xattr ...passed 00:06:08.666 Test: blob_dirty_shutdown ...passed 00:06:08.666 Test: blob_is_degraded ...passed 00:06:08.666 Suite: blob_esnap_bs_copy_noextent 00:06:08.666 Test: blob_esnap_create ...passed 00:06:08.666 Test: blob_esnap_thread_add_remove ...passed 00:06:08.925 Test: blob_esnap_clone_snapshot ...passed 00:06:08.925 Test: blob_esnap_clone_inflate ...passed 00:06:08.925 Test: blob_esnap_clone_decouple ...passed 00:06:08.925 Test: blob_esnap_clone_reload ...passed 00:06:08.925 Test: blob_esnap_hotplug ...passed 00:06:08.925 Suite: blob_copy_extent 00:06:08.925 Test: blob_init ...[2024-07-13 07:52:14.588966] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:06:08.925 passed 00:06:08.925 Test: blob_thin_provision ...passed 00:06:08.925 Test: blob_read_only ...passed 00:06:08.925 Test: bs_load ...[2024-07-13 07:52:14.624133] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:06:08.925 passed 00:06:08.925 Test: bs_load_custom_cluster_size ...passed 00:06:08.925 Test: bs_load_after_failed_grow ...passed 00:06:08.926 Test: bs_cluster_sz ...[2024-07-13 07:52:14.640586] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:06:08.926 [2024-07-13 07:52:14.640687] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:06:08.926 [2024-07-13 07:52:14.640714] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:06:08.926 passed 00:06:08.926 Test: bs_resize_md ...passed 00:06:08.926 Test: bs_destroy ...passed 00:06:08.926 Test: bs_type ...passed 00:06:08.926 Test: bs_super_block ...passed 00:06:08.926 Test: bs_test_recover_cluster_count ...passed 00:06:08.926 Test: bs_grow_live ...passed 00:06:08.926 Test: bs_grow_live_no_space ...passed 00:06:08.926 Test: bs_test_grow ...passed 00:06:08.926 Test: blob_serialize_test ...passed 00:06:08.926 Test: super_block_crc ...passed 00:06:08.926 Test: blob_thin_prov_write_count_io ...passed 00:06:08.926 Test: bs_load_iter_test ...passed 00:06:09.184 Test: blob_relations ...[2024-07-13 07:52:14.743774] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:09.184 [2024-07-13 07:52:14.743855] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:09.184 [2024-07-13 07:52:14.744504] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:09.184 [2024-07-13 07:52:14.744543] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:09.184 passed 00:06:09.184 Test: blob_relations2 ...[2024-07-13 07:52:14.754585] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:09.184 [2024-07-13 07:52:14.754656] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:09.184 [2024-07-13 07:52:14.754705] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:09.184 [2024-07-13 07:52:14.754728] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:09.184 [2024-07-13 07:52:14.755674] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:09.184 [2024-07-13 07:52:14.755715] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:09.184 [2024-07-13 07:52:14.756021] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:09.184 [2024-07-13 07:52:14.756055] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:09.184 passed 00:06:09.184 Test: blob_relations3 ...passed 00:06:09.184 Test: blobstore_clean_power_failure ...passed 00:06:09.184 Test: blob_delete_snapshot_power_failure ...[2024-07-13 07:52:14.863982] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:09.184 [2024-07-13 07:52:14.872635] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:09.184 [2024-07-13 07:52:14.881316] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:09.184 [2024-07-13 07:52:14.881394] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:09.184 [2024-07-13 07:52:14.881418] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:09.184 [2024-07-13 07:52:14.892668] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:09.184 [2024-07-13 07:52:14.892728] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:09.184 [2024-07-13 07:52:14.892748] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:09.184 [2024-07-13 07:52:14.892766] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:09.184 [2024-07-13 07:52:14.906263] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:09.184 [2024-07-13 07:52:14.906392] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:09.184 [2024-07-13 07:52:14.906442] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:09.184 [2024-07-13 07:52:14.906832] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:09.184 [2024-07-13 07:52:14.915640] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:06:09.184 [2024-07-13 07:52:14.915714] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:09.184 [2024-07-13 07:52:14.924251] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:06:09.184 [2024-07-13 07:52:14.924326] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:09.184 [2024-07-13 07:52:14.933028] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:06:09.184 [2024-07-13 07:52:14.933094] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:09.184 passed 00:06:09.184 Test: blob_create_snapshot_power_failure ...[2024-07-13 07:52:14.958609] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:09.184 [2024-07-13 07:52:14.966809] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:09.184 [2024-07-13 07:52:14.983254] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:09.443 [2024-07-13 07:52:14.998652] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:06:09.443 passed 00:06:09.443 Test: blob_io_unit ...passed 00:06:09.443 Test: blob_io_unit_compatibility ...passed 00:06:09.443 Test: blob_ext_md_pages ...passed 00:06:09.443 Test: blob_esnap_io_4096_4096 ...passed 00:06:09.443 Test: blob_esnap_io_512_512 ...passed 00:06:09.443 Test: blob_esnap_io_4096_512 ...passed 00:06:09.443 Test: 
blob_esnap_io_512_4096 ...passed 00:06:09.443 Suite: blob_bs_copy_extent 00:06:09.443 Test: blob_open ...passed 00:06:09.443 Test: blob_create ...[2024-07-13 07:52:15.188161] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:06:09.443 passed 00:06:09.443 Test: blob_create_loop ...passed 00:06:09.701 Test: blob_create_fail ...[2024-07-13 07:52:15.266076] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:09.701 passed 00:06:09.701 Test: blob_create_internal ...passed 00:06:09.701 Test: blob_create_zero_extent ...passed 00:06:09.701 Test: blob_snapshot ...passed 00:06:09.701 Test: blob_clone ...passed 00:06:09.701 Test: blob_inflate ...[2024-07-13 07:52:15.396148] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:06:09.701 passed 00:06:09.701 Test: blob_delete ...passed 00:06:09.701 Test: blob_resize_test ...[2024-07-13 07:52:15.446907] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:06:09.702 passed 00:06:09.702 Test: channel_ops ...passed 00:06:09.960 Test: blob_super ...passed 00:06:09.960 Test: blob_rw_verify_iov ...passed 00:06:09.960 Test: blob_unmap ...passed 00:06:09.960 Test: blob_iter ...passed 00:06:09.960 Test: blob_parse_md ...passed 00:06:09.960 Test: bs_load_pending_removal ...passed 00:06:09.960 Test: bs_unload ...[2024-07-13 07:52:15.661533] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:06:09.960 passed 00:06:09.960 Test: bs_usable_clusters ...passed 00:06:09.960 Test: blob_crc ...[2024-07-13 07:52:15.709301] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:09.960 [2024-07-13 07:52:15.709406] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:09.960 passed 00:06:09.960 Test: blob_flags ...passed 00:06:09.960 Test: bs_version ...passed 00:06:10.219 Test: blob_set_xattrs_test ...[2024-07-13 07:52:15.787420] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:10.219 [2024-07-13 07:52:15.787669] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:10.219 passed 00:06:10.219 Test: blob_thin_prov_alloc ...passed 00:06:10.219 Test: blob_insert_cluster_msg_test ...passed 00:06:10.219 Test: blob_thin_prov_rw ...passed 00:06:10.219 Test: blob_thin_prov_rle ...passed 00:06:10.219 Test: blob_thin_prov_rw_iov ...passed 00:06:10.219 Test: blob_snapshot_rw ...passed 00:06:10.219 Test: blob_snapshot_rw_iov ...passed 00:06:10.478 Test: blob_inflate_rw ...passed 00:06:10.478 Test: blob_snapshot_freeze_io ...passed 00:06:10.478 Test: blob_operation_split_rw ...passed 00:06:10.736 Test: blob_operation_split_rw_iov ...passed 00:06:10.736 Test: blob_simultaneous_operations ...[2024-07-13 07:52:16.379095] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:10.736 [2024-07-13 
07:52:16.379186] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:10.736 [2024-07-13 07:52:16.379742] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:10.736 [2024-07-13 07:52:16.379812] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:10.736 [2024-07-13 07:52:16.382289] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:10.736 [2024-07-13 07:52:16.382356] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:10.736 [2024-07-13 07:52:16.382514] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:10.736 [2024-07-13 07:52:16.382555] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:10.736 passed 00:06:10.736 Test: blob_persist_test ...passed 00:06:10.736 Test: blob_decouple_snapshot ...passed 00:06:10.736 Test: blob_seek_io_unit ...passed 00:06:10.736 Test: blob_nested_freezes ...passed 00:06:10.736 Suite: blob_blob_copy_extent 00:06:10.736 Test: blob_write ...passed 00:06:10.995 Test: blob_read ...passed 00:06:10.995 Test: blob_rw_verify ...passed 00:06:10.995 Test: blob_rw_verify_iov_nomem ...passed 00:06:10.995 Test: blob_rw_iov_read_only ...passed 00:06:10.995 Test: blob_xattr ...passed 00:06:10.995 Test: blob_dirty_shutdown ...passed 00:06:10.995 Test: blob_is_degraded ...passed 00:06:10.995 Suite: blob_esnap_bs_copy_extent 00:06:10.995 Test: blob_esnap_create ...passed 00:06:10.995 Test: blob_esnap_thread_add_remove ...passed 00:06:10.995 Test: blob_esnap_clone_snapshot ...passed 00:06:11.254 Test: blob_esnap_clone_inflate ...passed 00:06:11.254 Test: blob_esnap_clone_decouple ...passed 00:06:11.254 Test: blob_esnap_clone_reload ...passed 00:06:11.254 Test: blob_esnap_hotplug ...passed 00:06:11.254 00:06:11.254 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.254 suites 16 16 n/a 0 0 00:06:11.254 tests 348 348 348 0 0 00:06:11.254 asserts 92605 92605 92605 0 n/a 00:06:11.254 00:06:11.254 Elapsed time = 9.140 seconds 00:06:11.254 07:52:16 -- unit/unittest.sh@41 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:06:11.254 00:06:11.254 00:06:11.254 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.254 http://cunit.sourceforge.net/ 00:06:11.254 00:06:11.254 00:06:11.254 Suite: blob_bdev 00:06:11.254 Test: create_bs_dev ...passed 00:06:11.254 Test: create_bs_dev_ro ...passed 00:06:11.254 Test: create_bs_dev_rw ...passed 00:06:11.254 Test: claim_bs_dev ...passed 00:06:11.254 Test: claim_bs_dev_ro ...passed 00:06:11.254 Test: deferred_destroy_refs ...passed 00:06:11.254 Test: deferred_destroy_channels ...passed 00:06:11.254 Test: deferred_destroy_threads ...passed 00:06:11.254 00:06:11.254 [2024-07-13 07:52:16.970828] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:06:11.254 [2024-07-13 07:52:16.971182] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:06:11.254 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.254 suites 1 1 n/a 0 0 00:06:11.254 tests 8 8 8 0 0 00:06:11.254 
asserts 119 119 119 0 n/a 00:06:11.254 00:06:11.254 Elapsed time = 0.000 seconds 00:06:11.254 07:52:16 -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:06:11.254 00:06:11.254 00:06:11.254 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.254 http://cunit.sourceforge.net/ 00:06:11.254 00:06:11.254 00:06:11.254 Suite: tree 00:06:11.254 Test: blobfs_tree_op_test ...passed 00:06:11.254 00:06:11.254 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.254 suites 1 1 n/a 0 0 00:06:11.254 tests 1 1 1 0 0 00:06:11.254 asserts 27 27 27 0 n/a 00:06:11.254 00:06:11.254 Elapsed time = 0.000 seconds 00:06:11.254 07:52:17 -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:06:11.254 00:06:11.254 00:06:11.254 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.254 http://cunit.sourceforge.net/ 00:06:11.254 00:06:11.254 00:06:11.254 Suite: blobfs_async_ut 00:06:11.254 Test: fs_init ...passed 00:06:11.514 Test: fs_open ...passed 00:06:11.514 Test: fs_create ...passed 00:06:11.514 Test: fs_truncate ...passed 00:06:11.514 Test: fs_rename ...[2024-07-13 07:52:17.111787] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:06:11.514 passed 00:06:11.514 Test: fs_rw_async ...passed 00:06:11.514 Test: fs_writev_readv_async ...passed 00:06:11.514 Test: tree_find_buffer_ut ...passed 00:06:11.514 Test: channel_ops ...passed 00:06:11.514 Test: channel_ops_sync ...passed 00:06:11.514 00:06:11.514 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.514 suites 1 1 n/a 0 0 00:06:11.514 tests 10 10 10 0 0 00:06:11.514 asserts 292 292 292 0 n/a 00:06:11.514 00:06:11.514 Elapsed time = 0.120 seconds 00:06:11.514 07:52:17 -- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:06:11.514 00:06:11.514 00:06:11.514 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.514 http://cunit.sourceforge.net/ 00:06:11.514 00:06:11.514 00:06:11.514 Suite: blobfs_sync_ut 00:06:11.514 Test: cache_read_after_write ...passed 00:06:11.514 Test: file_length ...[2024-07-13 07:52:17.240851] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:06:11.514 passed 00:06:11.514 Test: append_write_to_extend_blob ...passed 00:06:11.514 Test: partial_buffer ...passed 00:06:11.514 Test: cache_write_null_buffer ...passed 00:06:11.514 Test: fs_create_sync ...passed 00:06:11.514 Test: fs_rename_sync ...passed 00:06:11.514 Test: cache_append_no_cache ...passed 00:06:11.774 Test: fs_delete_file_without_close ...passed 00:06:11.774 00:06:11.774 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.774 suites 1 1 n/a 0 0 00:06:11.774 tests 9 9 9 0 0 00:06:11.774 asserts 345 345 345 0 n/a 00:06:11.774 00:06:11.774 Elapsed time = 0.240 seconds 00:06:11.774 07:52:17 -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:06:11.774 00:06:11.775 00:06:11.775 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.775 http://cunit.sourceforge.net/ 00:06:11.775 00:06:11.775 00:06:11.775 Suite: blobfs_bdev_ut 00:06:11.775 Test: spdk_blobfs_bdev_detect_test ...passed 00:06:11.775 Test: spdk_blobfs_bdev_create_test ...passed 00:06:11.775 Test: spdk_blobfs_bdev_mount_test ...passed 00:06:11.775 00:06:11.775 Run Summary: Type Total Ran 
Passed Failed Inactive 00:06:11.775 suites 1 1 n/a 0 0 00:06:11.775 tests 3 3 3 0 0 00:06:11.775 asserts 9 9 9 0 n/a 00:06:11.775 00:06:11.775 Elapsed time = 0.000 seconds 00:06:11.775 [2024-07-13 07:52:17.385067] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:06:11.775 [2024-07-13 07:52:17.385333] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:06:11.775 00:06:11.775 real 0m9.731s 00:06:11.775 user 0m9.247s 00:06:11.775 sys 0m0.566s 00:06:11.775 ************************************ 00:06:11.775 END TEST unittest_blob_blobfs 00:06:11.775 ************************************ 00:06:11.775 07:52:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.775 07:52:17 -- common/autotest_common.sh@10 -- # set +x 00:06:11.775 07:52:17 -- unit/unittest.sh@232 -- # run_test unittest_event unittest_event 00:06:11.775 07:52:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:11.775 07:52:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:11.775 07:52:17 -- common/autotest_common.sh@10 -- # set +x 00:06:11.775 ************************************ 00:06:11.775 START TEST unittest_event 00:06:11.775 ************************************ 00:06:11.775 07:52:17 -- common/autotest_common.sh@1104 -- # unittest_event 00:06:11.775 07:52:17 -- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:06:11.775 00:06:11.775 00:06:11.775 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.775 http://cunit.sourceforge.net/ 00:06:11.775 00:06:11.775 00:06:11.775 Suite: app_suite 00:06:11.775 Test: test_spdk_app_parse_args ...app_ut [options] 00:06:11.775 options: 00:06:11.775 -c, --config JSON config file (default none) 00:06:11.775 --json JSON config file (default none) 00:06:11.775 --json-ignore-init-errors 00:06:11.775 don't exit on invalid config entry 00:06:11.775 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:11.775 -g, --single-file-segments 00:06:11.775 force creating just one hugetlbfs file 00:06:11.775 -h, --help show this usage 00:06:11.775 -i, --shm-id shared memory ID (optional) 00:06:11.775 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:06:11.775 --lcores lcore to CPU mapping list. The list is in the format: 00:06:11.775 [<,lcores[@CPUs]>...] 00:06:11.775 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:11.775 Within the group, '-' is used for range separator, 00:06:11.775 ',' is used for single number separator. 00:06:11.775 '( )' can be omitted for single element group, 00:06:11.775 '@' can be omitted if cpus and lcores have the same value 00:06:11.775 -n, --mem-channels channel number of memory channels used for DPDK 00:06:11.775 -p, --main-core main (primary) core for DPDK 00:06:11.775 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:11.775 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:11.775 --disable-cpumask-locks Disable CPU core lock files. 
00:06:11.775 --silence-noticelog disable notice level logging to stderr 00:06:11.775 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:11.775 -u, --no-pci disable PCI access 00:06:11.775 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:11.775 --max-delay maximum reactor delay (in microseconds) 00:06:11.775 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:11.775 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:11.775 -R, --huge-unlink unlink huge files after initialization 00:06:11.775 -v, --version print SPDK version 00:06:11.775 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:11.775 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:11.775 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:11.775 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:06:11.775 Tracepoints vary in size and can use more than one trace entry. 00:06:11.775 --rpcs-allowed comma-separated list of permitted RPCS 00:06:11.775 --env-context Opaque context for use of the env implementation 00:06:11.775 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:11.775 --no-huge run without using hugepages 00:06:11.775 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:06:11.775 -e, --tpoint-group [:] 00:06:11.775 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:06:11.775 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:06:11.775 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:06:11.775 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:06:11.775 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:06:11.775 app_ut [options] 00:06:11.775 options: 00:06:11.775 -c, --config JSON config file (default none) 00:06:11.775 --json JSON config file (default none) 00:06:11.775 --json-ignore-init-errors 00:06:11.775 don't exit on invalid config entry 00:06:11.775 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:11.775 -g, --single-file-segments 00:06:11.775 force creating just one hugetlbfs file 00:06:11.775 -h, --help show this usage 00:06:11.775 -i, --shm-id shared memory ID (optional) 00:06:11.775 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:06:11.775 --lcores lcore to CPU mapping list. The list is in the format: 00:06:11.775 [<,lcores[@CPUs]>...] 00:06:11.775 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:11.775 Within the group, '-' is used for range separator, 00:06:11.775 ',' is used for single number separator. 00:06:11.775 '( )' can be omitted for single element group, 00:06:11.775 '@' can be omitted if cpus and lcores have the same value 00:06:11.775 -n, --mem-channels channel number of memory channels used for DPDK 00:06:11.775 -p, --main-core main (primary) core for DPDK 00:06:11.775 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:11.775 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:11.775 --disable-cpumask-locks Disable CPU core lock files. 
00:06:11.775 --silence-noticelog disable notice level logging to stderr 00:06:11.775 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:11.775 -u, --no-pci disable PCI access 00:06:11.775 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:11.775 --max-delay maximum reactor delay (in microseconds) 00:06:11.775 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:11.775 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:11.775 -R, --huge-unlink unlink huge files after initialization 00:06:11.775 -v, --version print SPDK version 00:06:11.775 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:11.775 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:11.775 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:11.775 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:06:11.775 Tracepoints vary in size and can use more than one trace entry. 00:06:11.775 --rpcs-allowed comma-separated list of permitted RPCS 00:06:11.775 --env-context Opaque context for use of the env implementation 00:06:11.775 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:11.775 --no-huge run without using hugepages 00:06:11.775 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:06:11.775 -e, --tpoint-group [:] 00:06:11.775 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:06:11.775 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:06:11.775 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:06:11.775 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:06:11.775 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:06:11.775 app_ut [options] 00:06:11.775 options: 00:06:11.775 -c, --config JSON config file (default none) 00:06:11.775 --json JSON config file (default none) 00:06:11.775 --json-ignore-init-errors 00:06:11.775 don't exit on invalid config entry 00:06:11.775 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:11.775 -g, --single-file-segments 00:06:11.775 force creating just one hugetlbfs file 00:06:11.775 -h, --help show this usage 00:06:11.775 -i, --shm-id shared memory ID (optional) 00:06:11.775 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:06:11.775 --lcores lcore to CPU mapping list. The list is in the format: 00:06:11.775 [<,lcores[@CPUs]>...] 00:06:11.775 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:11.775 Within the group, '-' is used for range separator, 00:06:11.775 ',' is used for single number separator. 00:06:11.775 '( )' can be omitted for single element group, 00:06:11.775 '@' can be omitted if cpus and lcores have the same value 00:06:11.775 -n, --mem-channels channel number of memory channels used for DPDK 00:06:11.775 -p, --main-core main (primary) core for DPDK 00:06:11.776 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:11.776 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:11.776 --disable-cpumask-locks Disable CPU core lock files. 
00:06:11.776 --silence-noticelog disable notice level logging to stderr 00:06:11.776 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:11.776 -u, --no-pci disable PCI access 00:06:11.776 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:11.776 --max-delay maximum reactor delay (in microseconds) 00:06:11.776 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:11.776 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:11.776 -R, --huge-unlink unlink huge files after initialization 00:06:11.776 -v, --version print SPDK version 00:06:11.776 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:11.776 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:11.776 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:11.776 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:06:11.776 app_ut: invalid option -- 'z' 00:06:11.776 app_ut: unrecognized option '--test-long-opt' 00:06:11.776 [2024-07-13 07:52:17.463715] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1030:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 00:06:11.776 [2024-07-13 07:52:17.464002] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1211:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:06:11.776 Tracepoints vary in size and can use more than one trace entry. 00:06:11.776 --rpcs-allowed comma-separated list of permitted RPCS 00:06:11.776 --env-context Opaque context for use of the env implementation 00:06:11.776 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:11.776 --no-huge run without using hugepages 00:06:11.776 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:06:11.776 -e, --tpoint-group [:] 00:06:11.776 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:06:11.776 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:06:11.776 Groups and masks can be combined (e.g. thread,bdev:0x1). 
00:06:11.776 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:06:11.776 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:06:11.776 passed 00:06:11.776 00:06:11.776 [2024-07-13 07:52:17.464292] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1116:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:06:11.776 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.776 suites 1 1 n/a 0 0 00:06:11.776 tests 1 1 1 0 0 00:06:11.776 asserts 8 8 8 0 n/a 00:06:11.776 00:06:11.776 Elapsed time = 0.000 seconds 00:06:11.776 07:52:17 -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:06:11.776 00:06:11.776 00:06:11.776 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.776 http://cunit.sourceforge.net/ 00:06:11.776 00:06:11.776 00:06:11.776 Suite: app_suite 00:06:11.776 Test: test_create_reactor ...passed 00:06:11.776 Test: test_init_reactors ...passed 00:06:11.776 Test: test_event_call ...passed 00:06:11.776 Test: test_schedule_thread ...passed 00:06:11.776 Test: test_reschedule_thread ...passed 00:06:11.776 Test: test_bind_thread ...passed 00:06:11.776 Test: test_for_each_reactor ...passed 00:06:11.776 Test: test_reactor_stats ...passed 00:06:11.776 Test: test_scheduler ...passed 00:06:11.776 Test: test_governor ...passed 00:06:11.776 00:06:11.776 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.776 suites 1 1 n/a 0 0 00:06:11.776 tests 10 10 10 0 0 00:06:11.776 asserts 344 344 344 0 n/a 00:06:11.776 00:06:11.776 Elapsed time = 0.010 seconds 00:06:11.776 ************************************ 00:06:11.776 END TEST unittest_event 00:06:11.776 ************************************ 00:06:11.776 00:06:11.776 real 0m0.075s 00:06:11.776 user 0m0.040s 00:06:11.776 sys 0m0.036s 00:06:11.776 07:52:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.776 07:52:17 -- common/autotest_common.sh@10 -- # set +x 00:06:11.776 07:52:17 -- unit/unittest.sh@233 -- # uname -s 00:06:11.776 07:52:17 -- unit/unittest.sh@233 -- # '[' Linux = Linux ']' 00:06:11.776 07:52:17 -- unit/unittest.sh@234 -- # run_test unittest_ftl unittest_ftl 00:06:11.776 07:52:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:11.776 07:52:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:11.776 07:52:17 -- common/autotest_common.sh@10 -- # set +x 00:06:11.776 ************************************ 00:06:11.776 START TEST unittest_ftl 00:06:11.776 ************************************ 00:06:11.776 07:52:17 -- common/autotest_common.sh@1104 -- # unittest_ftl 00:06:11.776 07:52:17 -- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:06:12.036 00:06:12.036 00:06:12.036 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.036 http://cunit.sourceforge.net/ 00:06:12.036 00:06:12.036 00:06:12.036 Suite: ftl_band_suite 00:06:12.036 Test: test_band_block_offset_from_addr_base ...passed 00:06:12.036 Test: test_band_block_offset_from_addr_offset ...passed 00:06:12.036 Test: test_band_addr_from_block_offset ...passed 00:06:12.036 Test: test_band_set_addr ...passed 00:06:12.036 Test: test_invalidate_addr ...passed 00:06:12.036 Test: test_next_xfer_addr ...passed 00:06:12.036 00:06:12.036 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.036 suites 1 1 n/a 0 0 00:06:12.036 tests 6 6 6 0 0 00:06:12.036 asserts 30356 30356 30356 0 n/a 00:06:12.036 
00:06:12.036 Elapsed time = 0.160 seconds
00:06:12.036 07:52:17 -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut
00:06:12.036
00:06:12.036
00:06:12.036 CUnit - A unit testing framework for C - Version 2.1-3
00:06:12.036 http://cunit.sourceforge.net/
00:06:12.036
00:06:12.036
00:06:12.036 Suite: ftl_bitmap
00:06:12.036 Test: test_ftl_bitmap_create ...[2024-07-13 07:52:17.830970] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes [2024-07-13 07:52:17.831344] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes passed
00:06:12.036 Test: test_ftl_bitmap_get ...passed
00:06:12.036 Test: test_ftl_bitmap_set ...passed
00:06:12.036 Test: test_ftl_bitmap_clear ...passed
00:06:12.036 Test: test_ftl_bitmap_find_first_set ...passed
00:06:12.036 Test: test_ftl_bitmap_find_first_clear ...passed
00:06:12.036 Test: test_ftl_bitmap_count_set ...passed
00:06:12.036
00:06:12.036 Run Summary: Type Total Ran Passed Failed Inactive
00:06:12.036 suites 1 1 n/a 0 0
00:06:12.036 tests 7 7 7 0 0
00:06:12.036 asserts 137 137 137 0 n/a
00:06:12.036
00:06:12.036 Elapsed time = 0.000 seconds
00:06:12.036 07:52:17 -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut
00:06:12.296
00:06:12.296
00:06:12.296 CUnit - A unit testing framework for C - Version 2.1-3
00:06:12.296 http://cunit.sourceforge.net/
00:06:12.296
00:06:12.296
00:06:12.296 Suite: ftl_io_suite
00:06:12.296 Test: test_completion ...passed
00:06:12.296 Test: test_multiple_ios ...passed
00:06:12.296
00:06:12.296 Run Summary: Type Total Ran Passed Failed Inactive
00:06:12.296 suites 1 1 n/a 0 0
00:06:12.296 tests 2 2 2 0 0
00:06:12.296 asserts 47 47 47 0 n/a
00:06:12.296
00:06:12.296 Elapsed time = 0.000 seconds
00:06:12.296 07:52:17 -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut
00:06:12.297
00:06:12.297
00:06:12.297 CUnit - A unit testing framework for C - Version 2.1-3
00:06:12.297 http://cunit.sourceforge.net/
00:06:12.297
00:06:12.297
00:06:12.297 Suite: ftl_mngt
00:06:12.297 Test: test_next_step ...passed
00:06:12.297 Test: test_continue_step ...passed
00:06:12.297 Test: test_get_func_and_step_cntx_alloc ...passed
00:06:12.297 Test: test_fail_step ...passed
00:06:12.297 Test: test_mngt_call_and_call_rollback ...passed
00:06:12.297 Test: test_nested_process_failure ...passed
00:06:12.297
00:06:12.297 Run Summary: Type Total Ran Passed Failed Inactive
00:06:12.297 suites 1 1 n/a 0 0
00:06:12.297 tests 6 6 6 0 0
00:06:12.297 asserts 176 176 176 0 n/a
00:06:12.297
00:06:12.297 Elapsed time = 0.000 seconds
00:06:12.297 07:52:17 -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut
00:06:12.297
00:06:12.297
00:06:12.297 CUnit - A unit testing framework for C - Version 2.1-3
00:06:12.297 http://cunit.sourceforge.net/
00:06:12.297
00:06:12.297
00:06:12.297 Suite: ftl_mempool
00:06:12.297 Test: test_ftl_mempool_create ...passed
00:06:12.297 Test: test_ftl_mempool_get_put ...passed
00:06:12.297
00:06:12.297 Run Summary: Type Total Ran Passed Failed Inactive
00:06:12.297 suites 1 1 n/a 0 0
00:06:12.297 tests 2 2 2 0 0
00:06:12.297 asserts 36 36 36 0 n/a
00:06:12.297
00:06:12.297 Elapsed time = 0.000 seconds
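The two ftl_bitmap_create rejections in the ftl_bitmap suite above are deliberate negative tests: the suite hands the constructor a misaligned buffer, then a size that is not a multiple of 8. A minimal stand-alone C sketch of the same precondition; bitmap_buf_is_valid() is an illustrative helper written for this note, not SPDK code, and only the two 8-byte rules are taken from the error messages:

    #include <stdint.h>
    #include <stdlib.h>
    #include <assert.h>

    /* Mirrors the two checks reported by ftl_bitmap.c above: the backing
     * buffer must be 8-byte aligned and its size divisible by 8. */
    static int bitmap_buf_is_valid(const void *buf, size_t size)
    {
        if ((uintptr_t)buf % 8 != 0)
            return 0; /* "Buffer for bitmap must be aligned to 8 bytes" */
        if (size % 8 != 0)
            return 0; /* "Size of buffer for bitmap must be divisible by 8 bytes" */
        return 1;
    }

    int main(void)
    {
        void *buf = NULL;

        /* posix_memalign() guarantees the alignment half of the contract. */
        if (posix_memalign(&buf, 8, 64) != 0)
            return 1;
        assert(bitmap_buf_is_valid(buf, 64));              /* accepted */
        assert(!bitmap_buf_is_valid((char *)buf + 1, 64)); /* misaligned: rejected */
        assert(!bitmap_buf_is_valid(buf, 63));             /* odd size: rejected */
        free(buf);
        return 0;
    }

The size half of the contract stays on the caller, which is exactly the case the second error line exercises.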
00:06:12.297 07:52:17 -- unit/unittest.sh@60 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut
00:06:12.297
00:06:12.297
00:06:12.297 CUnit - A unit testing framework for C - Version 2.1-3
00:06:12.297 http://cunit.sourceforge.net/
00:06:12.297
00:06:12.297
00:06:12.297 Suite: ftl_addr64_suite
00:06:12.297 Test: test_addr_cached ...passed
00:06:12.297
00:06:12.297 Run Summary: Type Total Ran Passed Failed Inactive
00:06:12.297 suites 1 1 n/a 0 0
00:06:12.297 tests 1 1 1 0 0
00:06:12.297 asserts 1536 1536 1536 0 n/a
00:06:12.297
00:06:12.297 Elapsed time = 0.000 seconds
00:06:12.297 07:52:17 -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut
00:06:12.297
00:06:12.297
00:06:12.297 CUnit - A unit testing framework for C - Version 2.1-3
00:06:12.297 http://cunit.sourceforge.net/
00:06:12.297
00:06:12.297
00:06:12.297 Suite: ftl_sb
00:06:12.297 Test: test_sb_crc_v2 ...passed
00:06:12.297 Test: test_sb_crc_v3 ...passed
00:06:12.297 Test: test_sb_v3_md_layout ...[2024-07-13 07:52:17.948932] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions [2024-07-13 07:52:17.949193] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow [2024-07-13 07:52:17.949233] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow [2024-07-13 07:52:17.949270] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow [2024-07-13 07:52:17.949304] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found [2024-07-13 07:52:17.949376] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found [2024-07-13 07:52:17.949403] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found [2024-07-13 07:52:17.949447] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found [2024-07-13 07:52:17.949511] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found passed
00:06:12.297 Test: test_sb_v5_md_layout ...[2024-07-13 07:52:17.949547] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found [2024-07-13 07:52:17.949573] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found passed
00:06:12.297
00:06:12.297 Run Summary: Type Total Ran Passed Failed Inactive
00:06:12.297 suites 1 1 n/a 0 0
00:06:12.297 tests 4 4 4 0 0
00:06:12.297 asserts 148 148 148 0 n/a
00:06:12.297
00:06:12.297 Elapsed time = 0.000 seconds
00:06:12.297 07:52:17 -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut
00:06:12.297
00:06:12.297
00:06:12.297 CUnit - A unit testing framework
for C - Version 2.1-3 00:06:12.297 http://cunit.sourceforge.net/ 00:06:12.297 00:06:12.297 00:06:12.297 Suite: ftl_layout_upgrade 00:06:12.297 Test: test_l2p_upgrade ...passed 00:06:12.297 00:06:12.297 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.297 suites 1 1 n/a 0 0 00:06:12.297 tests 1 1 1 0 0 00:06:12.297 asserts 140 140 140 0 n/a 00:06:12.297 00:06:12.297 Elapsed time = 0.000 seconds 00:06:12.297 ************************************ 00:06:12.297 END TEST unittest_ftl 00:06:12.297 ************************************ 00:06:12.297 00:06:12.297 real 0m0.418s 00:06:12.297 user 0m0.190s 00:06:12.297 sys 0m0.231s 00:06:12.297 07:52:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.297 07:52:17 -- common/autotest_common.sh@10 -- # set +x 00:06:12.297 07:52:18 -- unit/unittest.sh@237 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:06:12.297 07:52:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:12.297 07:52:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.297 07:52:18 -- common/autotest_common.sh@10 -- # set +x 00:06:12.297 ************************************ 00:06:12.297 START TEST unittest_accel 00:06:12.297 ************************************ 00:06:12.297 07:52:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:06:12.297 00:06:12.297 00:06:12.297 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.297 http://cunit.sourceforge.net/ 00:06:12.297 00:06:12.297 00:06:12.297 Suite: accel_sequence 00:06:12.297 Test: test_sequence_fill_copy ...passed 00:06:12.297 Test: test_sequence_abort ...passed 00:06:12.297 Test: test_sequence_append_error ...passed 00:06:12.297 Test: test_sequence_completion_error ...passed 00:06:12.297 Test: test_sequence_copy_elision ...passed 00:06:12.297 Test: test_sequence_accel_buffers ...passed 00:06:12.297 Test: test_sequence_memory_domain ...[2024-07-13 07:52:18.056478] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7fba51a4f7c0 00:06:12.297 [2024-07-13 07:52:18.056666] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7fba51a4f7c0 00:06:12.297 [2024-07-13 07:52:18.056691] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7fba51a4f7c0 00:06:12.297 [2024-07-13 07:52:18.056725] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7fba51a4f7c0 00:06:12.297 passed 00:06:12.297 Test: test_sequence_module_memory_domain ...passed 00:06:12.297 Test: test_sequence_driver ...[2024-07-13 07:52:18.059990] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1728:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:06:12.297 [2024-07-13 07:52:18.060072] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1767:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:06:12.297 passed 00:06:12.297 Test: test_sequence_same_iovs ...passed 00:06:12.297 Test: test_sequence_crc32 ...[2024-07-13 07:52:18.062077] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1875:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7fba512767c0 using driver: ut 00:06:12.297 [2024-07-13 07:52:18.062148] 
/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1939:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7fba512767c0 through driver: ut
00:06:12.297 passed
00:06:12.297 Suite: accel
00:06:12.297 Test: test_spdk_accel_task_complete ...passed
00:06:12.297 Test: test_get_task ...passed
00:06:12.297 Test: test_spdk_accel_submit_copy ...passed
00:06:12.297 Test: test_spdk_accel_submit_dualcast ...[2024-07-13 07:52:18.064476] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses [2024-07-13 07:52:18.064516] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses passed
00:06:12.297 Test: test_spdk_accel_submit_compare ...passed
00:06:12.297 Test: test_spdk_accel_submit_fill ...passed
00:06:12.297 Test: test_spdk_accel_submit_crc32c ...passed
00:06:12.297 Test: test_spdk_accel_submit_crc32cv ...passed
00:06:12.297 Test: test_spdk_accel_submit_copy_crc32c ...passed
00:06:12.297 Test: test_spdk_accel_submit_xor ...passed
00:06:12.297 Test: test_spdk_accel_module_find_by_name ...passed
00:06:12.297 Test: test_spdk_accel_module_register ...passed
00:06:12.297
00:06:12.297 Run Summary: Type Total Ran Passed Failed Inactive
00:06:12.297 suites 2 2 n/a 0 0
00:06:12.297 tests 23 23 23 0 0
00:06:12.297 asserts 754 754 754 0 n/a
00:06:12.297
00:06:12.297 Elapsed time = 0.010 seconds
00:06:12.297 ************************************
00:06:12.297 END TEST unittest_accel
00:06:12.297 ************************************
00:06:12.297
00:06:12.297 real 0m0.040s
00:06:12.297 user 0m0.018s
00:06:12.297 sys 0m0.022s
00:06:12.297 07:52:18 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:12.297 07:52:18 -- common/autotest_common.sh@10 -- # set +x
00:06:12.557 07:52:18 -- unit/unittest.sh@238 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 07:52:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 07:52:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 07:52:18 -- common/autotest_common.sh@10 -- # set +x
00:06:12.557 ************************************
00:06:12.557 START TEST unittest_ioat
00:06:12.557 ************************************
00:06:12.557 07:52:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut
00:06:12.557
00:06:12.557
00:06:12.557 CUnit - A unit testing framework for C - Version 2.1-3
00:06:12.557 http://cunit.sourceforge.net/
00:06:12.557
00:06:12.557
00:06:12.557 Suite: ioat
00:06:12.557 Test: ioat_state_check ...passed
00:06:12.557
00:06:12.557 Run Summary: Type Total Ran Passed Failed Inactive
00:06:12.557 suites 1 1 n/a 0 0
00:06:12.557 tests 1 1 1 0 0
00:06:12.557 asserts 32 32 32 0 n/a
00:06:12.557
00:06:12.557 Elapsed time = 0.000 seconds
00:06:12.557
00:06:12.557 real 0m0.027s
00:06:12.557 user 0m0.014s
00:06:12.557 sys 0m0.013s
00:06:12.557 07:52:18 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:12.557 07:52:18 -- common/autotest_common.sh@10 -- # set +x
00:06:12.557 ************************************
00:06:12.557 END TEST unittest_ioat
00:06:12.557 ************************************
00:06:12.557 07:52:18 -- unit/unittest.sh@239 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
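The two spdk_accel_submit_dualcast failures in the accel suite above are likewise provoked on purpose: dualcast writes one source to two destinations, and per the message both destination addresses must sit on 4 KiB boundaries. A stand-alone sketch of satisfying that rule before submitting; the DUALCAST_ALIGN constant and the checks are illustrative, only the 4K requirement itself comes from the log:

    #include <stdint.h>
    #include <stdlib.h>
    #include <assert.h>

    #define DUALCAST_ALIGN 0x1000 /* 4 KiB, per the accel_ut *ERROR* text above */

    int main(void)
    {
        void *dst1 = NULL, *dst2 = NULL;

        /* Both destination buffers must start on a 4 KiB boundary before a
         * dualcast is submitted; the unit test provokes the error by
         * handing in unaligned pointers. */
        if (posix_memalign(&dst1, DUALCAST_ALIGN, 8192) != 0 ||
            posix_memalign(&dst2, DUALCAST_ALIGN, 8192) != 0)
            return 1;

        assert(((uintptr_t)dst1 & (DUALCAST_ALIGN - 1)) == 0);
        assert(((uintptr_t)dst2 & (DUALCAST_ALIGN - 1)) == 0);

        free(dst1);
        free(dst2);
        return 0;
    }

Allocating with posix_memalign() rather than plain malloc() is what keeps both pointers off the error path exercised above.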
07:52:18 -- unit/unittest.sh@240 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 07:52:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 07:52:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 07:52:18 -- common/autotest_common.sh@10 -- # set +x
00:06:12.557 ************************************
00:06:12.557 START TEST unittest_idxd_user
00:06:12.557 ************************************
00:06:12.557 07:52:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut
00:06:12.557
00:06:12.557
00:06:12.557 CUnit - A unit testing framework for C - Version 2.1-3
00:06:12.557 http://cunit.sourceforge.net/
00:06:12.557
00:06:12.557
00:06:12.557 Suite: idxd_user
00:06:12.557 Test: test_idxd_wait_cmd ...passed
00:06:12.557 Test: test_idxd_reset_dev ...passed
00:06:12.557 Test: test_idxd_group_config ...passed
00:06:12.557 Test: test_idxd_wq_config ...passed
00:06:12.557
00:06:12.557 Run Summary: Type Total Ran Passed Failed Inactive
00:06:12.557 suites 1 1 n/a 0 0
00:06:12.557 tests 4 4 4 0 0
00:06:12.557 asserts 20 20 20 0 n/a
00:06:12.557
00:06:12.557 Elapsed time = 0.000 seconds
00:06:12.557 [2024-07-13 07:52:18.220594] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 [2024-07-13 07:52:18.220756] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 [2024-07-13 07:52:18.220817] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 [2024-07-13 07:52:18.220845] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274
00:06:12.557
00:06:12.557 real 0m0.030s
00:06:12.557 user 0m0.013s
00:06:12.557 sys 0m0.017s
00:06:12.557 07:52:18 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:12.557 07:52:18 -- common/autotest_common.sh@10 -- # set +x
00:06:12.557 ************************************
00:06:12.557 END TEST unittest_idxd_user
00:06:12.557 ************************************
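The idxd_wait_cmd lines above show the two failure modes of its register polling: the command completes with an error bit set ("Command status reg reports error 0x1") or never completes ("Command timeout, waited 1"). A rough sketch of that polling pattern with an invented register layout; only the two outcomes mirror the log, and nothing here is the real IDXD register map:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical layout: bit 31 = done, low bits = error code. */
    enum { CMD_DONE = 1u << 31 };

    /* Spin on a device status register until it reports completion,
     * then treat any error bit as a failure, as the suite does. */
    static int wait_cmd(volatile uint32_t *status_reg, int max_polls)
    {
        for (int i = 0; i < max_polls; i++) {
            uint32_t s = *status_reg;
            if (s & CMD_DONE) {
                uint32_t err = s & ~CMD_DONE;
                if (err != 0) {
                    fprintf(stderr, "Command status reg reports error 0x%x\n", err);
                    return -1;
                }
                return 0;
            }
        }
        fprintf(stderr, "Command timeout, waited %d\n", max_polls);
        return -1;
    }

    int main(void)
    {
        uint32_t fake_reg = CMD_DONE | 0x1; /* completed, error bit 0x1 set */

        return wait_cmd(&fake_reg, 1) == -1 ? 0 : 1;
    }

The test drives both branches deliberately, which is why the errors appear in a fully passing suite.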
00:06:12.557 07:52:18 -- unit/unittest.sh@242 -- # run_test unittest_iscsi unittest_iscsi 07:52:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 07:52:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 07:52:18 -- common/autotest_common.sh@10 -- # set +x
00:06:12.557 ************************************
00:06:12.557 START TEST unittest_iscsi
00:06:12.557 ************************************
00:06:12.557 07:52:18 -- common/autotest_common.sh@1104 -- # unittest_iscsi
00:06:12.557 07:52:18 -- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut
00:06:12.557
00:06:12.557
00:06:12.557 CUnit - A unit testing framework for C - Version 2.1-3
00:06:12.557 http://cunit.sourceforge.net/
00:06:12.557
00:06:12.557
00:06:12.557 Suite: conn_suite
00:06:12.557 Test: read_task_split_in_order_case ...passed
00:06:12.557 Test: read_task_split_reverse_order_case ...passed
00:06:12.557 Test: propagate_scsi_error_status_for_split_read_tasks ...passed
00:06:12.557 Test: process_non_read_task_completion_test ...passed
00:06:12.557 Test: free_tasks_on_connection ...passed
00:06:12.557 Test: free_tasks_with_queued_datain ...passed
00:06:12.557 Test: abort_queued_datain_task_test ...passed
00:06:12.557 Test: abort_queued_datain_tasks_test ...passed
00:06:12.557
00:06:12.557 Run Summary: Type Total Ran Passed Failed Inactive
00:06:12.557 suites 1 1 n/a 0 0
00:06:12.557 tests 8 8 8 0 0
00:06:12.557 asserts 230 230 230 0 n/a
00:06:12.557
00:06:12.557 Elapsed time = 0.000 seconds
00:06:12.557 07:52:18 -- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut
00:06:12.557
00:06:12.557
00:06:12.557 CUnit - A unit testing framework for C - Version 2.1-3
00:06:12.557 http://cunit.sourceforge.net/
00:06:12.557
00:06:12.557
00:06:12.557 Suite: iscsi_suite
00:06:12.557 Test: param_negotiation_test ...passed
00:06:12.557 Test: list_negotiation_test ...passed
00:06:12.557 Test: parse_valid_test ...passed
00:06:12.557 Test: parse_invalid_test ...[2024-07-13 07:52:18.332557] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found [2024-07-13 07:52:18.332756] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found [2024-07-13 07:52:18.332804] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 208:iscsi_parse_param: *ERROR*: Empty key [2024-07-13 07:52:18.332867] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 8193 [2024-07-13 07:52:18.332983] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 256 passed
00:06:12.558
00:06:12.558 [2024-07-13 07:52:18.333078] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 215:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 [2024-07-13 07:52:18.333188] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 229:iscsi_parse_param: *ERROR*: Duplicated Key B Run Summary: Type Total Ran Passed Failed Inactive
00:06:12.558 suites 1 1 n/a 0 0
00:06:12.558 tests 4 4 4 0 0
00:06:12.558 asserts 161 161 161 0 n/a
00:06:12.558
00:06:12.558 Elapsed time = 0.010 seconds
00:06:12.558 07:52:18 -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut
00:06:12.558
00:06:12.558
00:06:12.558 CUnit - A unit testing framework for C - Version 2.1-3
00:06:12.558 http://cunit.sourceforge.net/
00:06:12.558
00:06:12.558
00:06:12.558 Suite: iscsi_target_node_suite
00:06:12.558 Test: add_lun_test_cases ...[2024-07-13 07:52:18.368075] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1248:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) [2024-07-13 07:52:18.368244] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative [2024-07-13 07:52:18.368297] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found [2024-07-13 07:52:18.368322] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found [2024-07-13 07:52:18.368338] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed passed
00:06:12.818 Test: allow_any_allowed ...passed
00:06:12.818 Test: allow_ipv6_allowed ...passed
00:06:12.818 Test: allow_ipv6_denied ...passed
00:06:12.818 Test: allow_ipv6_invalid ...passed
00:06:12.818 Test: allow_ipv4_allowed ...passed
00:06:12.818 Test: allow_ipv4_denied ...passed
00:06:12.818 Test: allow_ipv4_invalid ...passed
00:06:12.818 Test: node_access_allowed ...passed
00:06:12.818 Test: node_access_denied_by_empty_netmask ...passed
00:06:12.818
Test: node_access_multi_initiator_groups_cases ...passed 00:06:12.818 Test: allow_iscsi_name_multi_maps_case ...passed 00:06:12.818 Test: chap_param_test_cases ...passed 00:06:12.818 00:06:12.818 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.818 suites 1 1 n/a 0 0 00:06:12.818 tests 13 13 13 0 0 00:06:12.818 asserts 50 50 50 0 n/a 00:06:12.818 00:06:12.818 Elapsed time = 0.000 seconds 00:06:12.818 [2024-07-13 07:52:18.368808] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:06:12.818 [2024-07-13 07:52:18.368842] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:06:12.818 [2024-07-13 07:52:18.368881] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:06:12.818 [2024-07-13 07:52:18.368901] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:06:12.818 [2024-07-13 07:52:18.368923] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:06:12.818 07:52:18 -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:06:12.818 00:06:12.818 00:06:12.818 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.818 http://cunit.sourceforge.net/ 00:06:12.818 00:06:12.818 00:06:12.818 Suite: iscsi_suite 00:06:12.818 Test: op_login_check_target_test ...passed 00:06:12.818 Test: op_login_session_normal_test ...[2024-07-13 07:52:18.402774] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:06:12.818 [2024-07-13 07:52:18.403054] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:06:12.818 [2024-07-13 07:52:18.403099] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:06:12.818 [2024-07-13 07:52:18.403149] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:06:12.818 [2024-07-13 07:52:18.403192] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:06:12.818 [2024-07-13 07:52:18.403300] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:06:12.818 [2024-07-13 07:52:18.403505] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:06:12.818 passed 00:06:12.818 Test: maxburstlength_test ...passed 00:06:12.818 Test: underflow_for_read_transfer_test ...passed 00:06:12.818 Test: underflow_for_zero_read_transfer_test ...passed 00:06:12.818 Test: underflow_for_request_sense_test ...passed 00:06:12.818 Test: underflow_for_check_condition_test ...passed 00:06:12.818 Test: add_transfer_task_test ...passed 00:06:12.818 Test: get_transfer_task_test ...passed 00:06:12.818 Test: del_transfer_task_test ...passed 00:06:12.818 Test: clear_all_transfer_tasks_test ...passed 00:06:12.818 Test: build_iovs_test ...passed 00:06:12.818 Test: build_iovs_with_md_test ...[2024-07-13 07:52:18.403688] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:06:12.818 [2024-07-13 07:52:18.403936] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:06:12.818 [2024-07-13 07:52:18.403987] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4548:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:06:12.818 passed 00:06:12.818 Test: pdu_hdr_op_login_test ...[2024-07-13 07:52:18.404705] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:06:12.818 [2024-07-13 07:52:18.404792] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:06:12.818 [2024-07-13 07:52:18.404843] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:06:12.818 passed 00:06:12.818 Test: pdu_hdr_op_text_test ...passed 00:06:12.818 Test: pdu_hdr_op_logout_test ...[2024-07-13 07:52:18.404901] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2240:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:06:12.818 [2024-07-13 07:52:18.404968] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:06:12.818 [2024-07-13 07:52:18.405008] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2285:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:06:12.818 [2024-07-13 07:52:18.405047] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2515:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 
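The logout rejection just above is the RFC 7143 rule for discovery sessions: only reason code 0, "close the session", is acceptable, so the test's reason 1 ("close the connection") is refused. A compact C sketch of that check; the struct and function names are invented for this note, not SPDK's:

    #include <assert.h>
    #include <stdbool.h>

    #define ISCSI_LOGOUT_REASON_CLOSE_SESSION 0 /* RFC 7143 reason code */

    struct session { bool is_discovery; };

    /* On a discovery session only "close the session" is acceptable;
     * any other reason code is rejected, as in the log line above. */
    static bool logout_reason_allowed(const struct session *s, int reason)
    {
        return !s->is_discovery || reason == ISCSI_LOGOUT_REASON_CLOSE_SESSION;
    }

    int main(void)
    {
        struct session discovery = { .is_discovery = true };

        assert(logout_reason_allowed(&discovery, 0));  /* close the session: OK */
        assert(!logout_reason_allowed(&discovery, 1)); /* close the connection: refused */
        return 0;
    }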
00:06:12.818 passed 00:06:12.818 Test: pdu_hdr_op_scsi_test ...passed 00:06:12.818 Test: pdu_hdr_op_task_mgmt_test ...passed 00:06:12.818 Test: pdu_hdr_op_nopout_test ...[2024-07-13 07:52:18.405177] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:06:12.818 [2024-07-13 07:52:18.405206] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:06:12.818 [2024-07-13 07:52:18.405244] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:06:12.818 [2024-07-13 07:52:18.405297] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3397:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:06:12.818 [2024-07-13 07:52:18.405341] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:06:12.818 [2024-07-13 07:52:18.405400] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:06:12.818 [2024-07-13 07:52:18.405466] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:06:12.818 [2024-07-13 07:52:18.405504] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:06:12.818 [2024-07-13 07:52:18.405592] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:06:12.818 [2024-07-13 07:52:18.405641] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:06:12.818 [2024-07-13 07:52:18.405667] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:06:12.818 passed 00:06:12.818 Test: pdu_hdr_op_data_test ...[2024-07-13 07:52:18.405699] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:06:12.818 [2024-07-13 07:52:18.405740] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:06:12.818 [2024-07-13 07:52:18.405784] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:06:12.818 [2024-07-13 07:52:18.405819] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:06:12.818 [2024-07-13 07:52:18.405867] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4216:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:06:12.818 [2024-07-13 07:52:18.405908] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:06:12.818 [2024-07-13 07:52:18.405943] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:06:12.818 [2024-07-13 07:52:18.405981] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4243:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:06:12.818 passed 00:06:12.818 Test: empty_text_with_cbit_test ...passed 00:06:12.818 Test: pdu_payload_read_test ...[2024-07-13 
07:52:18.407113] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4631:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:06:12.818 passed 00:06:12.818 Test: data_out_pdu_sequence_test ...passed 00:06:12.818 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:06:12.818 00:06:12.818 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.818 suites 1 1 n/a 0 0 00:06:12.818 tests 24 24 24 0 0 00:06:12.818 asserts 150253 150253 150253 0 n/a 00:06:12.818 00:06:12.818 Elapsed time = 0.010 seconds 00:06:12.818 07:52:18 -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:06:12.818 00:06:12.818 00:06:12.818 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.818 http://cunit.sourceforge.net/ 00:06:12.818 00:06:12.818 00:06:12.818 Suite: init_grp_suite 00:06:12.818 Test: create_initiator_group_success_case ...passed 00:06:12.818 Test: find_initiator_group_success_case ...passed 00:06:12.818 Test: register_initiator_group_twice_case ...passed 00:06:12.818 Test: add_initiator_name_success_case ...passed 00:06:12.818 Test: add_initiator_name_fail_case ...passed 00:06:12.818 Test: delete_all_initiator_names_success_case ...[2024-07-13 07:52:18.436357] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:06:12.818 passed 00:06:12.818 Test: add_netmask_success_case ...passed 00:06:12.818 Test: add_netmask_fail_case ...passed 00:06:12.818 Test: delete_all_netmasks_success_case ...passed 00:06:12.818 Test: initiator_name_overwrite_all_to_any_case ...passed 00:06:12.818 Test: netmask_overwrite_all_to_any_case ...passed 00:06:12.818 Test: add_delete_initiator_names_case ...passed 00:06:12.818 Test: add_duplicated_initiator_names_case ...passed 00:06:12.818 Test: delete_nonexisting_initiator_names_case ...passed 00:06:12.818 Test: add_delete_netmasks_case ...passed 00:06:12.818 Test: add_duplicated_netmasks_case ...passed 00:06:12.818 Test: delete_nonexisting_netmasks_case ...[2024-07-13 07:52:18.436702] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:06:12.818 passed 00:06:12.818 00:06:12.818 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.818 suites 1 1 n/a 0 0 00:06:12.818 tests 17 17 17 0 0 00:06:12.818 asserts 108 108 108 0 n/a 00:06:12.818 00:06:12.818 Elapsed time = 0.000 seconds 00:06:12.818 07:52:18 -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:06:12.818 00:06:12.818 00:06:12.818 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.818 http://cunit.sourceforge.net/ 00:06:12.818 00:06:12.818 00:06:12.818 Suite: portal_grp_suite 00:06:12.818 Test: portal_create_ipv4_normal_case ...passed 00:06:12.818 Test: portal_create_ipv6_normal_case ...passed 00:06:12.818 Test: portal_create_ipv4_wildcard_case ...passed 00:06:12.818 Test: portal_create_ipv6_wildcard_case ...passed 00:06:12.818 Test: portal_create_twice_case ...passed 00:06:12.818 Test: portal_grp_register_unregister_case ...passed 00:06:12.818 Test: portal_grp_register_twice_case ...passed 00:06:12.819 Test: portal_grp_add_delete_case ...[2024-07-13 07:52:18.461305] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:06:12.819 passed 00:06:12.819 Test: portal_grp_add_delete_twice_case ...passed 00:06:12.819 00:06:12.819 Run Summary: 
Type Total Ran Passed Failed Inactive
00:06:12.819 suites 1 1 n/a 0 0
00:06:12.819 tests 9 9 9 0 0
00:06:12.819 asserts 44 44 44 0 n/a
00:06:12.819
00:06:12.819 Elapsed time = 0.000 seconds
00:06:12.819
00:06:12.819 real 0m0.193s
00:06:12.819 user 0m0.097s
00:06:12.819 sys 0m0.098s
00:06:12.819 07:52:18 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:12.819 07:52:18 -- common/autotest_common.sh@10 -- # set +x
00:06:12.819 ************************************
00:06:12.819 END TEST unittest_iscsi
00:06:12.819 ************************************
00:06:12.819 07:52:18 -- unit/unittest.sh@243 -- # run_test unittest_json unittest_json 07:52:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 07:52:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 07:52:18 -- common/autotest_common.sh@10 -- # set +x
00:06:12.819 ************************************
00:06:12.819 START TEST unittest_json
00:06:12.819 ************************************
00:06:12.819 07:52:18 -- common/autotest_common.sh@1104 -- # unittest_json
00:06:12.819 07:52:18 -- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut
00:06:12.819
00:06:12.819
00:06:12.819 CUnit - A unit testing framework for C - Version 2.1-3
00:06:12.819 http://cunit.sourceforge.net/
00:06:12.819
00:06:12.819
00:06:12.819 Suite: json
00:06:12.819 Test: test_parse_literal ...passed
00:06:12.819 Test: test_parse_string_simple ...passed
00:06:12.819 Test: test_parse_string_control_chars ...passed
00:06:12.819 Test: test_parse_string_utf8 ...passed
00:06:12.819 Test: test_parse_string_escapes_twochar ...passed
00:06:12.819 Test: test_parse_string_escapes_unicode ...passed
00:06:12.819 Test: test_parse_number ...passed
00:06:12.819 Test: test_parse_array ...passed
00:06:12.819 Test: test_parse_object ...passed
00:06:12.819 Test: test_parse_nesting ...passed
00:06:12.819 Test: test_parse_comment ...passed
00:06:12.819
00:06:12.819 Run Summary: Type Total Ran Passed Failed Inactive
00:06:12.819 suites 1 1 n/a 0 0
00:06:12.819 tests 11 11 11 0 0
00:06:12.819 asserts 1516 1516 1516 0 n/a
00:06:12.819
00:06:12.819 Elapsed time = 0.000 seconds
00:06:12.819 07:52:18 -- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut
00:06:12.819
00:06:12.819
00:06:12.819 CUnit - A unit testing framework for C - Version 2.1-3
00:06:12.819 http://cunit.sourceforge.net/
00:06:12.819
00:06:12.819
00:06:12.819 Suite: json
00:06:12.819 Test: test_strequal ...passed
00:06:12.819 Test: test_num_to_uint16 ...passed
00:06:12.819 Test: test_num_to_int32 ...passed
00:06:12.819 Test: test_num_to_uint64 ...passed
00:06:12.819 Test: test_decode_object ...passed
00:06:12.819 Test: test_decode_array ...passed
00:06:12.819 Test: test_decode_bool ...passed
00:06:12.819 Test: test_decode_uint16 ...passed
00:06:12.819 Test: test_decode_int32 ...passed
00:06:12.819 Test: test_decode_uint32 ...passed
00:06:12.819 Test: test_decode_uint64 ...passed
00:06:12.819 Test: test_decode_string ...passed
00:06:12.819 Test: test_decode_uuid ...passed
00:06:12.819 Test: test_find ...passed
00:06:12.819 Test: test_find_array ...passed
00:06:12.819 Test: test_iterating ...passed
00:06:12.819 Test: test_free_object ...passed
00:06:12.819
00:06:12.819 Run Summary: Type Total Ran Passed Failed Inactive
00:06:12.819 suites 1 1 n/a 0 0
00:06:12.819 tests 17 17 17 0 0
00:06:12.819 asserts 236 236 236 0 n/a
00:06:12.819
00:06:12.819 Elapsed time = 0.000 seconds
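Every block in this log has the same CUnit 2.1-3 shape: a framework banner, one line per test, then a Run Summary table and elapsed time. A minimal example of producing that output with CUnit's Basic interface; the suite name and test body here are placeholders, not the actual json_util_ut sources:

    #include <CUnit/Basic.h>

    static void test_example(void)
    {
        /* Placeholder assertion standing in for a real decode check. */
        CU_ASSERT_EQUAL(1 + 1, 2);
    }

    int main(void)
    {
        if (CU_initialize_registry() != CUE_SUCCESS)
            return CU_get_error();

        CU_pSuite suite = CU_add_suite("json", NULL, NULL);
        if (suite == NULL ||
            CU_add_test(suite, "test_example", test_example) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }

        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests(); /* prints the banner and Run Summary seen here */
        CU_cleanup_registry();
        return CU_get_error();
    }

Compiling with -lcunit and running the binary prints exactly the banner and Run Summary format that repeats throughout this section.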
07:52:18 -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:06:12.819 00:06:12.819 00:06:12.819 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.819 http://cunit.sourceforge.net/ 00:06:12.819 00:06:12.819 00:06:12.819 Suite: json 00:06:12.819 Test: test_write_literal ...passed 00:06:12.819 Test: test_write_string_simple ...passed 00:06:12.819 Test: test_write_string_escapes ...passed 00:06:12.819 Test: test_write_string_utf16le ...passed 00:06:12.819 Test: test_write_number_int32 ...passed 00:06:12.819 Test: test_write_number_uint32 ...passed 00:06:12.819 Test: test_write_number_uint128 ...passed 00:06:12.819 Test: test_write_string_number_uint128 ...passed 00:06:12.819 Test: test_write_number_int64 ...passed 00:06:12.819 Test: test_write_number_uint64 ...passed 00:06:12.819 Test: test_write_number_double ...passed 00:06:12.819 Test: test_write_uuid ...passed 00:06:12.819 Test: test_write_array ...passed 00:06:12.819 Test: test_write_object ...passed 00:06:12.819 Test: test_write_nesting ...passed 00:06:12.819 Test: test_write_val ...passed 00:06:12.819 00:06:12.819 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.819 suites 1 1 n/a 0 0 00:06:12.819 tests 16 16 16 0 0 00:06:12.819 asserts 918 918 918 0 n/a 00:06:12.819 00:06:12.819 Elapsed time = 0.010 seconds 00:06:12.819 07:52:18 -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:06:12.819 00:06:12.819 00:06:12.819 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.819 http://cunit.sourceforge.net/ 00:06:12.819 00:06:12.819 00:06:12.819 Suite: jsonrpc 00:06:12.819 Test: test_parse_request ...passed 00:06:12.819 Test: test_parse_request_streaming ...passed 00:06:12.819 00:06:12.819 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.819 suites 1 1 n/a 0 0 00:06:12.819 tests 2 2 2 0 0 00:06:12.819 asserts 289 289 289 0 n/a 00:06:12.819 00:06:12.819 Elapsed time = 0.000 seconds 00:06:13.078 00:06:13.078 real 0m0.106s 00:06:13.078 user 0m0.057s 00:06:13.078 sys 0m0.051s 00:06:13.078 07:52:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.078 07:52:18 -- common/autotest_common.sh@10 -- # set +x 00:06:13.078 ************************************ 00:06:13.078 END TEST unittest_json 00:06:13.078 ************************************ 00:06:13.078 07:52:18 -- unit/unittest.sh@244 -- # run_test unittest_rpc unittest_rpc 00:06:13.078 07:52:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:13.078 07:52:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.078 07:52:18 -- common/autotest_common.sh@10 -- # set +x 00:06:13.078 ************************************ 00:06:13.078 START TEST unittest_rpc 00:06:13.078 ************************************ 00:06:13.078 07:52:18 -- common/autotest_common.sh@1104 -- # unittest_rpc 00:06:13.078 07:52:18 -- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:06:13.078 00:06:13.078 00:06:13.078 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.078 http://cunit.sourceforge.net/ 00:06:13.078 00:06:13.078 00:06:13.078 Suite: rpc 00:06:13.078 Test: test_jsonrpc_handler ...passed 00:06:13.078 Test: test_spdk_rpc_is_method_allowed ...passed 00:06:13.078 Test: test_rpc_get_methods ...passed 00:06:13.078 Test: test_rpc_spdk_get_version ...passed 00:06:13.078 Test: test_spdk_rpc_listen_close ...passed 00:06:13.078 00:06:13.078 Run Summary: Type Total Ran Passed Failed 
Inactive 00:06:13.078 suites 1 1 n/a 0 0 00:06:13.078 tests 5 5 5 0 0 00:06:13.078 asserts 20 20 20 0 n/a 00:06:13.078 00:06:13.078 Elapsed time = 0.000 seconds 00:06:13.078 [2024-07-13 07:52:18.709114] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 378:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:06:13.078 ************************************ 00:06:13.078 END TEST unittest_rpc 00:06:13.078 ************************************ 00:06:13.078 00:06:13.078 real 0m0.034s 00:06:13.078 user 0m0.012s 00:06:13.078 sys 0m0.023s 00:06:13.079 07:52:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.079 07:52:18 -- common/autotest_common.sh@10 -- # set +x 00:06:13.079 07:52:18 -- unit/unittest.sh@245 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:06:13.079 07:52:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:13.079 07:52:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.079 07:52:18 -- common/autotest_common.sh@10 -- # set +x 00:06:13.079 ************************************ 00:06:13.079 START TEST unittest_notify 00:06:13.079 ************************************ 00:06:13.079 07:52:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:06:13.079 00:06:13.079 00:06:13.079 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.079 http://cunit.sourceforge.net/ 00:06:13.079 00:06:13.079 00:06:13.079 Suite: app_suite 00:06:13.079 Test: notify ...passed 00:06:13.079 00:06:13.079 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.079 suites 1 1 n/a 0 0 00:06:13.079 tests 1 1 1 0 0 00:06:13.079 asserts 13 13 13 0 n/a 00:06:13.079 00:06:13.079 Elapsed time = 0.000 seconds 00:06:13.079 ************************************ 00:06:13.079 END TEST unittest_notify 00:06:13.079 ************************************ 00:06:13.079 00:06:13.079 real 0m0.031s 00:06:13.079 user 0m0.013s 00:06:13.079 sys 0m0.018s 00:06:13.079 07:52:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.079 07:52:18 -- common/autotest_common.sh@10 -- # set +x 00:06:13.079 07:52:18 -- unit/unittest.sh@246 -- # run_test unittest_nvme unittest_nvme 00:06:13.079 07:52:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:13.079 07:52:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.079 07:52:18 -- common/autotest_common.sh@10 -- # set +x 00:06:13.079 ************************************ 00:06:13.079 START TEST unittest_nvme 00:06:13.079 ************************************ 00:06:13.079 07:52:18 -- common/autotest_common.sh@1104 -- # unittest_nvme 00:06:13.079 07:52:18 -- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:06:13.079 00:06:13.079 00:06:13.079 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.079 http://cunit.sourceforge.net/ 00:06:13.079 00:06:13.079 00:06:13.079 Suite: nvme 00:06:13.079 Test: test_opc_data_transfer ...passed 00:06:13.079 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:06:13.079 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:06:13.079 Test: test_trid_parse_and_compare ...[2024-07-13 07:52:18.875439] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1167:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:06:13.079 [2024-07-13 07:52:18.875693] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:06:13.079 [2024-07-13 07:52:18.875771] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1179:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:06:13.079 [2024-07-13 07:52:18.875810] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:06:13.079 [2024-07-13 07:52:18.875842] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1190:parse_next_key: *ERROR*: Key without value 00:06:13.079 [2024-07-13 07:52:18.875925] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:06:13.079 passed 00:06:13.079 Test: test_trid_trtype_str ...passed 00:06:13.079 Test: test_trid_adrfam_str ...passed 00:06:13.079 Test: test_nvme_ctrlr_probe ...passed 00:06:13.079 Test: test_spdk_nvme_probe ...passed 00:06:13.079 Test: test_spdk_nvme_connect ...passed 00:06:13.079 Test: test_nvme_ctrlr_probe_internal ...passed 00:06:13.079 Test: test_nvme_init_controllers ...passed 00:06:13.079 Test: test_nvme_driver_init ...[2024-07-13 07:52:18.876185] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:06:13.079 [2024-07-13 07:52:18.876265] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:06:13.079 [2024-07-13 07:52:18.876306] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:06:13.079 [2024-07-13 07:52:18.876344] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:06:13.079 [2024-07-13 07:52:18.876383] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:06:13.079 [2024-07-13 07:52:18.876437] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 989:spdk_nvme_connect: *ERROR*: No transport ID specified 00:06:13.079 [2024-07-13 07:52:18.876596] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:06:13.079 [2024-07-13 07:52:18.876657] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1000:spdk_nvme_connect: *ERROR*: Create probe context failed 00:06:13.079 [2024-07-13 07:52:18.876805] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:06:13.079 [2024-07-13 07:52:18.876845] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:06:13.079 [2024-07-13 07:52:18.876908] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:06:13.079 [2024-07-13 07:52:18.876963] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:06:13.079 [2024-07-13 07:52:18.877002] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:06:13.339 passed 00:06:13.339 Test: test_spdk_nvme_detach ...passed 00:06:13.339 Test: test_nvme_completion_poll_cb ...passed 00:06:13.339 Test: test_nvme_user_copy_cmd_complete ...passed 00:06:13.339 Test: test_nvme_allocate_request_null ...passed 00:06:13.339 Test: test_nvme_allocate_request ...[2024-07-13 07:52:18.993076] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:06:13.339 [2024-07-13 07:52:18.993233] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: 
*ERROR*: failed to initialize mutex 00:06:13.339 passed 00:06:13.339 Test: test_nvme_free_request ...passed 00:06:13.339 Test: test_nvme_allocate_request_user_copy ...passed 00:06:13.339 Test: test_nvme_robust_mutex_init_shared ...passed 00:06:13.339 Test: test_nvme_request_check_timeout ...passed 00:06:13.339 Test: test_nvme_wait_for_completion ...passed 00:06:13.339 Test: test_spdk_nvme_parse_func ...passed 00:06:13.339 Test: test_spdk_nvme_detach_async ...passed 00:06:13.339 Test: test_nvme_parse_addr ...passed 00:06:13.339 00:06:13.339 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.339 suites 1 1 n/a 0 0 00:06:13.339 tests 25 25 25 0 0 00:06:13.339 asserts 326 326 326 0 n/a 00:06:13.339 00:06:13.339 Elapsed time = 0.010 seconds 00:06:13.339 [2024-07-13 07:52:18.994112] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1577:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:06:13.339 07:52:19 -- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:06:13.339 00:06:13.339 00:06:13.339 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.339 http://cunit.sourceforge.net/ 00:06:13.339 00:06:13.339 00:06:13.339 Suite: nvme_ctrlr 00:06:13.339 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-07-13 07:52:19.027414] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.339 passed 00:06:13.339 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-07-13 07:52:19.029162] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.339 passed 00:06:13.339 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-07-13 07:52:19.030420] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.339 passed 00:06:13.339 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-07-13 07:52:19.031680] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.339 passed 00:06:13.339 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-07-13 07:52:19.033002] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.339 [2024-07-13 07:52:19.034183] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22
[2024-07-13 07:52:19.035505] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22
[2024-07-13 07:52:19.036708] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22
passed 00:06:13.339 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-07-13 07:52:19.039077] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.339 [2024-07-13 07:52:19.041578] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22
[2024-07-13 07:52:19.042912]
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:06:13.339 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-07-13 07:52:19.045428] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.339 [2024-07-13 07:52:19.046687] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-13 07:52:19.049140] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:06:13.339 Test: test_nvme_ctrlr_init_delay ...[2024-07-13 07:52:19.051687] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.339 passed 00:06:13.339 Test: test_alloc_io_qpair_rr_1 ...passed 00:06:13.339 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:06:13.339 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:06:13.339 Test: test_alloc_io_qpair_wrr_1 ...passed 00:06:13.339 Test: test_alloc_io_qpair_wrr_2 ...[2024-07-13 07:52:19.053062] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.339 [2024-07-13 07:52:19.053155] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:06:13.339 [2024-07-13 07:52:19.053313] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:06:13.339 [2024-07-13 07:52:19.053374] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:06:13.339 [2024-07-13 07:52:19.053427] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:06:13.339 [2024-07-13 07:52:19.053630] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.339 passed 00:06:13.339 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-07-13 07:52:19.053713] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.339 [2024-07-13 07:52:19.053806] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:06:13.339 [2024-07-13 07:52:19.053979] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4846:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 
00:06:13.339 passed 00:06:13.339 Test: test_nvme_ctrlr_fail ...passed 00:06:13.339 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:06:13.339 Test: test_nvme_ctrlr_set_supported_features ...passed 00:06:13.339 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:06:13.339 Test: test_nvme_ctrlr_test_active_ns ...[2024-07-13 07:52:19.054112] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:06:13.339 [2024-07-13 07:52:19.054197] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4923:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:06:13.339 [2024-07-13 07:52:19.054259] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:06:13.339 [2024-07-13 07:52:19.054339] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:06:13.339 [2024-07-13 07:52:19.054613] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.598 passed 00:06:13.598 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:06:13.598 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:06:13.598 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:06:13.598 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-07-13 07:52:19.242236] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.598 passed 00:06:13.598 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-07-13 07:52:19.249438] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.598 passed 00:06:13.599 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-07-13 07:52:19.250723] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.599 [2024-07-13 07:52:19.250800] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2870:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:06:13.599 passed 00:06:13.599 Test: test_alloc_io_qpair_fail ...passed 00:06:13.599 Test: test_nvme_ctrlr_add_remove_process ...passed 00:06:13.599 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:06:13.599 Test: test_nvme_ctrlr_set_state ...passed 00:06:13.599 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-07-13 07:52:19.252007] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.599 [2024-07-13 07:52:19.252133] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 497:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:06:13.599 [2024-07-13 07:52:19.252266] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1465:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:06:13.599 [2024-07-13 07:52:19.252314] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.599 passed 00:06:13.599 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-07-13 07:52:19.273373] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.599 passed 00:06:13.599 Test: test_nvme_ctrlr_ns_mgmt ...[2024-07-13 07:52:19.312837] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.599 passed 00:06:13.599 Test: test_nvme_ctrlr_reset ...passed 00:06:13.599 Test: test_nvme_ctrlr_aer_callback ...[2024-07-13 07:52:19.314327] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.599 [2024-07-13 07:52:19.314657] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.599 passed 00:06:13.599 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-07-13 07:52:19.316060] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.599 passed 00:06:13.599 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:06:13.599 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:06:13.599 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-07-13 07:52:19.317736] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.599 passed 00:06:13.599 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:06:13.599 Test: test_nvme_ctrlr_ana_resize ...[2024-07-13 07:52:19.319128] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.599 passed 00:06:13.599 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:06:13.599 Test: test_nvme_transport_ctrlr_ready ...passed 00:06:13.599 Test: test_nvme_ctrlr_disable ...[2024-07-13 07:52:19.320610] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4016:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:06:13.599 [2024-07-13 07:52:19.320677] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4067:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:06:13.599 [2024-07-13 07:52:19.320737] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.599 passed 00:06:13.599 00:06:13.599 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.599 suites 1 1 n/a 0 0 00:06:13.599 tests 43 43 43 0 0 00:06:13.599 asserts 10418 10418 10418 0 n/a 00:06:13.599 00:06:13.599 Elapsed time = 0.250 seconds 00:06:13.599 07:52:19 -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:06:13.599 00:06:13.599 00:06:13.599 CUnit - A unit testing framework for C - Version 2.1-3 
00:06:13.599 http://cunit.sourceforge.net/ 00:06:13.599 00:06:13.599 00:06:13.599 Suite: nvme_ctrlr_cmd 00:06:13.599 Test: test_get_log_pages ...passed 00:06:13.599 Test: test_set_feature_cmd ...passed 00:06:13.599 Test: test_set_feature_ns_cmd ...passed 00:06:13.599 Test: test_get_feature_cmd ...passed 00:06:13.599 Test: test_get_feature_ns_cmd ...passed 00:06:13.599 Test: test_abort_cmd ...passed 00:06:13.599 Test: test_set_host_id_cmds ...passed 00:06:13.599 Test: test_io_cmd_raw_no_payload_build ...passed 00:06:13.599 Test: test_io_raw_cmd ...passed 00:06:13.599 Test: test_io_raw_cmd_with_md ...passed 00:06:13.599 Test: test_namespace_attach ...passed 00:06:13.599 Test: test_namespace_detach ...passed 00:06:13.599 Test: test_namespace_create ...passed 00:06:13.599 Test: test_namespace_delete ...passed 00:06:13.599 Test: test_doorbell_buffer_config ...passed 00:06:13.599 Test: test_format_nvme ...passed 00:06:13.599 Test: test_fw_commit ...passed 00:06:13.599 Test: test_fw_image_download ...passed 00:06:13.599 Test: test_sanitize ...passed 00:06:13.599 Test: test_directive ...passed 00:06:13.599 Test: test_nvme_request_add_abort ...passed 00:06:13.599 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:06:13.599 Test: test_nvme_ctrlr_cmd_identify ...passed 00:06:13.599 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:06:13.599 00:06:13.599 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.599 suites 1 1 n/a 0 0 00:06:13.599 tests 24 24 24 0 0 00:06:13.599 asserts 198 198 198 0 n/a 00:06:13.599 00:06:13.599 Elapsed time = 0.000 seconds 00:06:13.599 [2024-07-13 07:52:19.367705] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:06:13.599 07:52:19 -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:06:13.599 00:06:13.599 00:06:13.599 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.599 http://cunit.sourceforge.net/ 00:06:13.599 00:06:13.599 00:06:13.599 Suite: nvme_ctrlr_cmd 00:06:13.599 Test: test_geometry_cmd ...passed 00:06:13.599 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:06:13.599 00:06:13.599 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.599 suites 1 1 n/a 0 0 00:06:13.599 tests 2 2 2 0 0 00:06:13.599 asserts 7 7 7 0 n/a 00:06:13.599 00:06:13.599 Elapsed time = 0.000 seconds 00:06:13.599 07:52:19 -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:06:13.858 00:06:13.858 00:06:13.858 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.858 http://cunit.sourceforge.net/ 00:06:13.858 00:06:13.858 00:06:13.858 Suite: nvme 00:06:13.858 Test: test_nvme_ns_construct ...passed 00:06:13.858 Test: test_nvme_ns_uuid ...passed 00:06:13.858 Test: test_nvme_ns_csi ...passed 00:06:13.858 Test: test_nvme_ns_data ...passed 00:06:13.858 Test: test_nvme_ns_set_identify_data ...passed 00:06:13.858 Test: test_spdk_nvme_ns_get_values ...passed 00:06:13.858 Test: test_spdk_nvme_ns_is_active ...passed 00:06:13.858 Test: spdk_nvme_ns_supports ...passed 00:06:13.858 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:06:13.858 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:06:13.858 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:06:13.858 Test: test_nvme_ns_find_id_desc ...passed 00:06:13.858 00:06:13.858 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.858 suites 1 1 n/a 0 0 00:06:13.858 tests 
12 12 12 0 0 00:06:13.858 asserts 83 83 83 0 n/a 00:06:13.858 00:06:13.858 Elapsed time = 0.000 seconds 00:06:13.858 07:52:19 -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:06:13.858 00:06:13.858 00:06:13.858 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.858 http://cunit.sourceforge.net/ 00:06:13.858 00:06:13.858 00:06:13.858 Suite: nvme_ns_cmd 00:06:13.858 Test: split_test ...passed 00:06:13.858 Test: split_test2 ...passed 00:06:13.858 Test: split_test3 ...passed 00:06:13.858 Test: split_test4 ...passed 00:06:13.858 Test: test_nvme_ns_cmd_flush ...passed 00:06:13.858 Test: test_nvme_ns_cmd_dataset_management ...passed 00:06:13.858 Test: test_nvme_ns_cmd_copy ...passed 00:06:13.858 Test: test_io_flags ...passed 00:06:13.858 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:06:13.858 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:06:13.858 Test: test_nvme_ns_cmd_reservation_register ...[2024-07-13 07:52:19.445772] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:06:13.858 passed 00:06:13.858 Test: test_nvme_ns_cmd_reservation_release ...passed 00:06:13.858 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:06:13.858 Test: test_nvme_ns_cmd_reservation_report ...passed 00:06:13.858 Test: test_cmd_child_request ...passed 00:06:13.858 Test: test_nvme_ns_cmd_readv ...passed 00:06:13.858 Test: test_nvme_ns_cmd_read_with_md ...passed 00:06:13.858 Test: test_nvme_ns_cmd_writev ...passed 00:06:13.858 Test: test_nvme_ns_cmd_write_with_md ...passed 00:06:13.858 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:06:13.858 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:06:13.858 Test: test_nvme_ns_cmd_comparev ...[2024-07-13 07:52:19.446829] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 287:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:06:13.858 passed 00:06:13.858 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:06:13.858 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:06:13.858 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:06:13.858 Test: test_nvme_ns_cmd_setup_request ...passed 00:06:13.858 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:06:13.858 Test: test_spdk_nvme_ns_cmd_writev_ext ...passed 00:06:13.858 Test: test_spdk_nvme_ns_cmd_readv_ext ...passed 00:06:13.858 Test: test_nvme_ns_cmd_verify ...passed 00:06:13.858 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:06:13.858 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:06:13.858 00:06:13.858 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.858 suites 1 1 n/a 0 0 00:06:13.858 tests 32 32 32 0 0 00:06:13.858 asserts 550 550 550 0 n/a 00:06:13.858 00:06:13.858 Elapsed time = 0.010 seconds 00:06:13.858 [2024-07-13 07:52:19.448172] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:06:13.858 [2024-07-13 07:52:19.448296] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:06:13.858 07:52:19 -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:06:13.858 00:06:13.858 00:06:13.858 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.858 http://cunit.sourceforge.net/ 00:06:13.858 00:06:13.858 00:06:13.858 Suite: nvme_ns_cmd 00:06:13.858 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 
00:06:13.858 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:06:13.858 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:06:13.858 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:06:13.858 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:06:13.858 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:06:13.858 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:06:13.858 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:06:13.858 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:06:13.858 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:06:13.858 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:06:13.858 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:06:13.858 00:06:13.858 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.858 suites 1 1 n/a 0 0 00:06:13.858 tests 12 12 12 0 0 00:06:13.858 asserts 123 123 123 0 n/a 00:06:13.858 00:06:13.858 Elapsed time = 0.000 seconds 00:06:13.858 07:52:19 -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:06:13.858 00:06:13.858 00:06:13.858 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.858 http://cunit.sourceforge.net/ 00:06:13.858 00:06:13.858 00:06:13.858 Suite: nvme_qpair 00:06:13.858 Test: test3 ...passed 00:06:13.858 Test: test_ctrlr_failed ...passed 00:06:13.858 Test: struct_packing ...passed 00:06:13.858 Test: test_nvme_qpair_process_completions ...passed 00:06:13.858 Test: test_nvme_completion_is_retry ...passed 00:06:13.858 Test: test_get_status_string ...passed 00:06:13.858 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:06:13.858 Test: test_nvme_qpair_submit_request ...passed 00:06:13.858 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:06:13.858 Test: test_nvme_qpair_manual_complete_request ...passed 00:06:13.858 Test: test_nvme_qpair_init_deinit ...passed 00:06:13.858 Test: test_nvme_get_sgl_print_info ...passed 00:06:13.858 00:06:13.858 [2024-07-13 07:52:19.496645] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:06:13.858 [2024-07-13 07:52:19.496925] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:06:13.859 [2024-07-13 07:52:19.496995] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:06:13.859 [2024-07-13 07:52:19.497094] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:06:13.859 [2024-07-13 07:52:19.497437] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:06:13.859 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.859 suites 1 1 n/a 0 0 00:06:13.859 tests 12 12 12 0 0 00:06:13.859 asserts 154 154 154 0 n/a 00:06:13.859 00:06:13.859 Elapsed time = 0.010 seconds 00:06:13.859 07:52:19 -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:06:13.859 00:06:13.859 00:06:13.859 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.859 http://cunit.sourceforge.net/ 00:06:13.859 00:06:13.859 00:06:13.859 Suite: nvme_pcie 00:06:13.859 Test: test_prp_list_append 
...[2024-07-13 07:52:19.526139] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:06:13.859 [2024-07-13 07:52:19.526393] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:06:13.859 [2024-07-13 07:52:19.526437] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:06:13.859 passed 00:06:13.859 Test: test_nvme_pcie_hotplug_monitor ...passed 00:06:13.859 Test: test_shadow_doorbell_update ...passed 00:06:13.859 Test: test_build_contig_hw_sgl_request ...passed[2024-07-13 07:52:19.526677] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:06:13.859 [2024-07-13 07:52:19.526761] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:06:13.859 00:06:13.859 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:06:13.859 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:06:13.859 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:06:13.859 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:06:13.859 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:06:13.859 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:06:13.859 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:06:13.859 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:06:13.859 Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-07-13 07:52:19.527032] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:06:13.859 [2024-07-13 07:52:19.527150] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:06:13.859 [2024-07-13 07:52:19.527238] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:06:13.859 passed 00:06:13.859 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:06:13.859 00:06:13.859 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.859 suites 1 1 n/a 0 0 00:06:13.859 tests 14 14 14 0 0 00:06:13.859 asserts 235 235 235 0 n/a 00:06:13.859 00:06:13.859 Elapsed time = 0.000 seconds 00:06:13.859 [2024-07-13 07:52:19.527304] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:06:13.859 [2024-07-13 07:52:19.527358] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:06:13.859 07:52:19 -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:06:13.859 00:06:13.859 00:06:13.859 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.859 http://cunit.sourceforge.net/ 00:06:13.859 00:06:13.859 00:06:13.859 Suite: nvme_ns_cmd 00:06:13.859 Test: nvme_poll_group_create_test ...passed 00:06:13.859 Test: nvme_poll_group_add_remove_test ...passed 00:06:13.859 Test: nvme_poll_group_process_completions ...passed 00:06:13.859 Test: nvme_poll_group_destroy_test ...passed 00:06:13.859 Test: nvme_poll_group_get_free_stats ...passed 00:06:13.859 00:06:13.859 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.859 suites 1 1 n/a 0 0 00:06:13.859 tests 5 5 5 0 0 00:06:13.859 asserts 75 75 75 0 n/a 00:06:13.859 00:06:13.859 Elapsed time = 0.000 seconds 00:06:13.859 07:52:19 -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:06:13.859 00:06:13.859 00:06:13.859 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.859 http://cunit.sourceforge.net/ 00:06:13.859 00:06:13.859 00:06:13.859 Suite: nvme_quirks 00:06:13.859 Test: test_nvme_quirks_striping ...passed 00:06:13.859 00:06:13.859 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.859 suites 1 1 n/a 0 0 00:06:13.859 tests 1 1 1 0 0 00:06:13.859 asserts 5 5 5 0 n/a 00:06:13.859 00:06:13.859 Elapsed time = 0.000 seconds 00:06:13.859 07:52:19 -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:06:13.859 00:06:13.859 00:06:13.859 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.859 http://cunit.sourceforge.net/ 00:06:13.859 00:06:13.859 00:06:13.859 Suite: nvme_tcp 00:06:13.859 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:06:13.859 Test: test_nvme_tcp_build_iovs ...passed 00:06:13.859 Test: test_nvme_tcp_build_sgl_request ...passed 00:06:13.859 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:06:13.859 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:06:13.859 Test: test_nvme_tcp_req_complete_safe ...passed 00:06:13.859 Test: test_nvme_tcp_req_get ...passed 00:06:13.859 Test: test_nvme_tcp_req_init ...passed 00:06:13.859 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:06:13.859 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:06:13.859 Test: test_nvme_tcp_qpair_set_recv_state ...passed 00:06:13.859 Test: test_nvme_tcp_alloc_reqs ...passed 00:06:13.859 Test: test_nvme_tcp_qpair_send_h2c_term_req ...passed 00:06:13.859 Test: test_nvme_tcp_pdu_ch_handle ...[2024-07-13 07:52:19.612381] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to 
construct tcp_req=0x7ffd1afc73d0, and the iovcnt=16, remaining_size=28672 00:06:13.859 [2024-07-13 07:52:19.612994] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd1afc90e0 is same with the state(6) to be set 00:06:13.859 [2024-07-13 07:52:19.613271] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd1afc8280 is same with the state(5) to be set 00:06:13.859 [2024-07-13 07:52:19.613346] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1108:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7ffd1afc8db0 00:06:13.859 [2024-07-13 07:52:19.613404] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:06:13.859 [2024-07-13 07:52:19.613509] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd1afc8740 is same with the state(5) to be set 00:06:13.859 [2024-07-13 07:52:19.613624] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1118:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:06:13.859 [2024-07-13 07:52:19.613698] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd1afc8740 is same with the state(5) to be set 00:06:13.859 [2024-07-13 07:52:19.613744] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:06:13.859 [2024-07-13 07:52:19.613774] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd1afc8740 is same with the state(5) to be set 00:06:13.859 [2024-07-13 07:52:19.613812] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd1afc8740 is same with the state(5) to be set 00:06:13.859 [2024-07-13 07:52:19.613843] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd1afc8740 is same with the state(5) to be set 00:06:13.859 passed 00:06:13.859 Test: test_nvme_tcp_qpair_connect_sock ...passed 00:06:13.859 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:06:13.859 Test: test_nvme_tcp_c2h_payload_handle ...passed 00:06:13.859 Test: test_nvme_tcp_icresp_handle ...passed 00:06:13.859 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:06:13.859 Test: test_nvme_tcp_capsule_resp_hdr_handle ...passed 00:06:13.859 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:06:13.859 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...passed 00:06:13.859 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-07-13 07:52:19.613903] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd1afc8740 is same with the state(5) to be set 00:06:13.859 [2024-07-13 07:52:19.613938] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd1afc8740 is same with the state(5) to be set 00:06:13.859 [2024-07-13 07:52:19.613989] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd1afc8740 is same with the state(5) to be set 00:06:13.859 [2024-07-13 07:52:19.614131] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:06:13.859 [2024-07-13 07:52:19.614182] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:06:13.859 [2024-07-13 07:52:19.614489] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:06:13.859 [2024-07-13 07:52:19.614593] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffd1afc88f0): PDU Sequence Error 00:06:13.859 [2024-07-13 07:52:19.614695] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1508:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:06:13.859 [2024-07-13 07:52:19.614730] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1515:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:06:13.859 [2024-07-13 07:52:19.614769] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd1afc8280 is same with the state(5) to be set 00:06:13.859 [2024-07-13 07:52:19.614806] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1524:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:06:13.859 [2024-07-13 07:52:19.614845] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd1afc8280 is same with the state(5) to be set 00:06:13.859 [2024-07-13 07:52:19.614894] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd1afc8280 is same with the state(0) to be set 00:06:13.859 [2024-07-13 07:52:19.614954] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffd1afc8db0): PDU Sequence Error 00:06:13.859 [2024-07-13 07:52:19.615045] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1585:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7ffd1afc7570 00:06:13.859 [2024-07-13 07:52:19.615183] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7ffd1afc6bf0, errno=0, rc=0 00:06:13.859 [2024-07-13 07:52:19.615245] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd1afc6bf0 is same with the state(5) to be set 00:06:13.860 [2024-07-13 07:52:19.615299] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd1afc6bf0 is same with the state(5) to be set 00:06:13.860 [2024-07-13 07:52:19.615359] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffd1afc6bf0 (0): Success 00:06:13.860 [2024-07-13 07:52:19.615402] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffd1afc6bf0 (0): Success 00:06:14.118 passed 00:06:14.118 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:06:14.118 Test: test_nvme_tcp_poll_group_get_stats ...passed 00:06:14.118 Test: test_nvme_tcp_ctrlr_construct ...[2024-07-13 07:52:19.678703] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 
00:06:14.118 [2024-07-13 07:52:19.678833] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:06:14.118 [2024-07-13 07:52:19.679037] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:14.118 [2024-07-13 07:52:19.679070] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:14.118 [2024-07-13 07:52:19.679271] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:06:14.118 [2024-07-13 07:52:19.679318] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:06:14.118 [2024-07-13 07:52:19.679403] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:06:14.118 passed 00:06:14.118 Test: test_nvme_tcp_qpair_submit_request ...[2024-07-13 07:52:19.679699] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:06:14.118 [2024-07-13 07:52:19.679839] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000001540 with addr=192.168.1.78, port=23 00:06:14.118 [2024-07-13 07:52:19.679890] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:06:14.118 passed 00:06:14.118 00:06:14.118 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.118 suites 1 1 n/a 0 0 00:06:14.118 tests 27 27 27 0 0 00:06:14.118 asserts 624 624 624 0 n/a 00:06:14.118 00:06:14.118 Elapsed time = 0.070 seconds 00:06:14.118 [2024-07-13 07:52:19.679998] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x613000001a80, and the iovcnt=1, remaining_size=1024 00:06:14.118 [2024-07-13 07:52:19.680043] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 961:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:06:14.118 07:52:19 -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:06:14.118 00:06:14.118 00:06:14.118 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.118 http://cunit.sourceforge.net/ 00:06:14.118 00:06:14.118 00:06:14.118 Suite: nvme_transport 00:06:14.118 Test: test_nvme_get_transport ...passed 00:06:14.118 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:06:14.118 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:06:14.118 Test: test_nvme_transport_poll_group_add_remove ...passed 00:06:14.118 Test: test_ctrlr_get_memory_domains ...passed 00:06:14.118 00:06:14.118 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.118 suites 1 1 n/a 0 0 00:06:14.118 tests 5 5 5 0 0 00:06:14.118 asserts 28 28 28 0 n/a 00:06:14.118 00:06:14.118 Elapsed time = 0.000 seconds 00:06:14.118 07:52:19 -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:06:14.118 00:06:14.118 00:06:14.118 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.118 http://cunit.sourceforge.net/ 00:06:14.118 00:06:14.118 00:06:14.118 Suite: nvme_io_msg 00:06:14.118 Test: test_nvme_io_msg_send ...passed 00:06:14.118 Test: 
test_nvme_io_msg_process ...passed 00:06:14.118 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:06:14.118 00:06:14.118 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.118 suites 1 1 n/a 0 0 00:06:14.118 tests 3 3 3 0 0 00:06:14.118 asserts 56 56 56 0 n/a 00:06:14.118 00:06:14.118 Elapsed time = 0.000 seconds 00:06:14.118 07:52:19 -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:06:14.118 00:06:14.118 00:06:14.118 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.118 http://cunit.sourceforge.net/ 00:06:14.118 00:06:14.118 00:06:14.118 Suite: nvme_pcie_common 00:06:14.118 Test: test_nvme_pcie_ctrlr_alloc_cmb ...passed 00:06:14.118 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:06:14.118 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:06:14.118 Test: test_nvme_pcie_ctrlr_connect_qpair ...passed 00:06:14.118 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...passed 00:06:14.118 Test: test_nvme_pcie_poll_group_get_stats ...passed 00:06:14.118 00:06:14.118 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.119 suites 1 1 n/a 0 0 00:06:14.119 tests 6 6 6 0 0 00:06:14.119 asserts 148 148 148 0 n/a 00:06:14.119 00:06:14.119 Elapsed time = 0.000 seconds 00:06:14.119 [2024-07-13 07:52:19.754966] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:06:14.119 [2024-07-13 07:52:19.755595] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:06:14.119 [2024-07-13 07:52:19.755721] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 
00:06:14.119 [2024-07-13 07:52:19.755758] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:06:14.119 [2024-07-13 07:52:19.756082] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:14.119 [2024-07-13 07:52:19.756126] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:14.119 07:52:19 -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:06:14.119 00:06:14.119 00:06:14.119 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.119 http://cunit.sourceforge.net/ 00:06:14.119 00:06:14.119 00:06:14.119 Suite: nvme_fabric 00:06:14.119 Test: test_nvme_fabric_prop_set_cmd ...passed 00:06:14.119 Test: test_nvme_fabric_prop_get_cmd ...passed 00:06:14.119 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:06:14.119 Test: test_nvme_fabric_discover_probe ...passed 00:06:14.119 Test: test_nvme_fabric_qpair_connect ...passed 00:06:14.119 00:06:14.119 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.119 suites 1 1 n/a 0 0 00:06:14.119 tests 5 5 5 0 0 00:06:14.119 asserts 60 60 60 0 n/a 00:06:14.119 00:06:14.119 Elapsed time = 0.000 seconds 00:06:14.119 [2024-07-13 07:52:19.780824] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:06:14.119 07:52:19 -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:06:14.119 00:06:14.119 00:06:14.119 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.119 http://cunit.sourceforge.net/ 00:06:14.119 00:06:14.119 00:06:14.119 Suite: nvme_opal 00:06:14.119 Test: test_opal_nvme_security_recv_send_done ...passed 00:06:14.119 Test: test_opal_add_short_atom_header ...passed 00:06:14.119 00:06:14.119 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.119 suites 1 1 n/a 0 0 00:06:14.119 tests 2 2 2 0 0 00:06:14.119 asserts 22 22 22 0 n/a 00:06:14.119 00:06:14.119 Elapsed time = 0.000 seconds 00:06:14.119 [2024-07-13 07:52:19.805149] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 
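Note on the nvme suites above, before the closing banner: the error paths hit most often are the transport-ID parser in nvme.c (parse_next_key, the "Key without ':' or '=' separator" and "Key length 32 greater than maximum allowed 31" messages) and the admin_queue_size clamp in nvme_ctrlr_construct. Below is a minimal sketch of the key:value transport-ID grammar through the public spdk/nvme.h API; the TCP address is illustrative and is not an endpoint probed by this run.

#include <stdio.h>
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_nvme_transport_id trid = {0};
	/* Well-formed: whitespace-separated key:value pairs; keys are at most
	 * 31 characters, which is what the "Key length 32 greater than
	 * maximum allowed 31" error above is checking. */
	const char *ok = "trtype:TCP adrfam:IPv4 traddr:127.0.0.1 trsvcid:4420";
	/* Malformed: no ':' or '=' separator after the key. */
	const char *bad = "trtype";

	if (spdk_nvme_transport_id_parse(&trid, ok) != 0) {
		fprintf(stderr, "unexpected failure on well-formed TRID\n");
		return 1;
	}
	printf("traddr=%s trsvcid=%s\n", trid.traddr, trid.trsvcid);

	if (spdk_nvme_transport_id_parse(&trid, bad) == 0) {
		fprintf(stderr, "malformed TRID unexpectedly accepted\n");
		return 1;
	}
	return 0;
}

The same structure feeds spdk_nvme_probe() and spdk_nvme_connect(), which is why the trid tests sit at the front of the nvme suite.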
00:06:14.119 ************************************ 00:06:14.119 END TEST unittest_nvme 00:06:14.119 ************************************ 00:06:14.119 00:06:14.119 real 0m0.954s 00:06:14.119 user 0m0.427s 00:06:14.119 sys 0m0.378s 00:06:14.119 07:52:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.119 07:52:19 -- common/autotest_common.sh@10 -- # set +x 00:06:14.119 07:52:19 -- unit/unittest.sh@247 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:06:14.119 07:52:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:14.119 07:52:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:14.119 07:52:19 -- common/autotest_common.sh@10 -- # set +x 00:06:14.119 ************************************ 00:06:14.119 START TEST unittest_log 00:06:14.119 ************************************ 00:06:14.119 07:52:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:06:14.119 00:06:14.119 00:06:14.119 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.119 http://cunit.sourceforge.net/ 00:06:14.119 00:06:14.119 00:06:14.119 Suite: log 00:06:14.119 Test: log_test ...passed 00:06:14.119 Test: deprecation ...[2024-07-13 07:52:19.889035] log_ut.c: 54:log_test: *WARNING*: log warning unit test 00:06:14.119 [2024-07-13 07:52:19.889251] log_ut.c: 55:log_test: *DEBUG*: log test 00:06:14.119 log dump test: 00:06:14.119 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:06:14.119 spdk dump test: 00:06:14.119 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:06:14.119 spdk dump test: 00:06:14.119 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:06:14.119 00000010 65 20 63 68 61 72 73 e chars 00:06:15.493 passed 00:06:15.493 00:06:15.493 Run Summary: Type Total Ran Passed Failed Inactive 00:06:15.493 suites 1 1 n/a 0 0 00:06:15.493 tests 2 2 2 0 0 00:06:15.493 asserts 73 73 73 0 n/a 00:06:15.493 00:06:15.493 Elapsed time = 0.000 seconds 00:06:15.493 ************************************ 00:06:15.493 END TEST unittest_log 00:06:15.493 ************************************ 00:06:15.493 00:06:15.493 real 0m1.037s 00:06:15.493 user 0m0.018s 00:06:15.493 sys 0m0.020s 00:06:15.493 07:52:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.493 07:52:20 -- common/autotest_common.sh@10 -- # set +x 00:06:15.493 07:52:20 -- unit/unittest.sh@248 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:06:15.493 07:52:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:15.493 07:52:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.493 07:52:20 -- common/autotest_common.sh@10 -- # set +x 00:06:15.493 ************************************ 00:06:15.493 START TEST unittest_lvol 00:06:15.493 ************************************ 00:06:15.493 07:52:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:06:15.493 00:06:15.493 00:06:15.493 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.493 http://cunit.sourceforge.net/ 00:06:15.493 00:06:15.493 00:06:15.493 Suite: lvol 00:06:15.493 Test: lvs_init_unload_success ...passed 00:06:15.493 Test: lvs_init_destroy_success ...passed 00:06:15.493 Test: lvs_init_opts_success ...passed 00:06:15.493 Test: lvs_unload_lvs_is_null_fail ...passed 00:06:15.493 Test: lvs_names ...passed 00:06:15.493 Test: lvol_create_destroy_success ...passed 00:06:15.493 Test: lvol_create_fail ...passed 00:06:15.493 Test: 
lvol_destroy_fail ...passed 00:06:15.493 Test: lvol_close ...passed 00:06:15.493 Test: lvol_resize ...passed 00:06:15.493 Test: lvol_set_read_only ...passed 00:06:15.493 Test: test_lvs_load ...[2024-07-13 07:52:20.982986] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:06:15.493 [2024-07-13 07:52:20.983325] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:06:15.493 [2024-07-13 07:52:20.983407] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:06:15.493 [2024-07-13 07:52:20.983477] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:06:15.493 [2024-07-13 07:52:20.983507] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:06:15.493 [2024-07-13 07:52:20.983618] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:06:15.493 [2024-07-13 07:52:20.983842] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:06:15.493 [2024-07-13 07:52:20.983924] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:06:15.493 [2024-07-13 07:52:20.984077] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:06:15.493 [2024-07-13 07:52:20.984197] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:06:15.493 [2024-07-13 07:52:20.984231] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:06:15.493 passed 00:06:15.493 Test: lvols_load ...passed 00:06:15.493 Test: lvol_open ...passed 00:06:15.493 Test: lvol_snapshot ...[2024-07-13 07:52:20.984573] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:06:15.493 [2024-07-13 07:52:20.984602] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:06:15.493 [2024-07-13 07:52:20.984704] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:06:15.493 [2024-07-13 07:52:20.984769] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:06:15.493 passed 00:06:15.493 Test: lvol_snapshot_fail ...passed 00:06:15.493 Test: lvol_clone ...passed 00:06:15.493 Test: lvol_clone_fail ...passed 00:06:15.493 Test: lvol_iter_clones ...passed 00:06:15.493 Test: lvol_refcnt ...passed 00:06:15.493 Test: lvol_names ...passed 00:06:15.493 Test: lvol_create_thin_provisioned ...passed 00:06:15.493 Test: lvol_rename ...passed 00:06:15.493 Test: lvs_rename ...passed 00:06:15.493 Test: lvol_inflate ...passed 00:06:15.493 Test: lvol_decouple_parent ...passed 00:06:15.493 Test: lvol_get_xattr ...passed 00:06:15.493 Test: lvol_esnap_reload ...passed 00:06:15.493 Test: lvol_esnap_create_bad_args ...passed 00:06:15.493 Test: lvol_esnap_create_delete ...[2024-07-13 07:52:20.985129] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:06:15.493 [2024-07-13 07:52:20.985492] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:06:15.493 [2024-07-13 07:52:20.985774] 
/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol d14284e5-a487-4c7f-a5ae-375580cdf744 because it is still open 00:06:15.493 [2024-07-13 07:52:20.985899] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:06:15.493 [2024-07-13 07:52:20.985981] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:06:15.493 [2024-07-13 07:52:20.986101] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:06:15.493 [2024-07-13 07:52:20.986319] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:06:15.493 [2024-07-13 07:52:20.986384] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:06:15.493 [2024-07-13 07:52:20.986511] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:06:15.493 [2024-07-13 07:52:20.986639] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:06:15.493 [2024-07-13 07:52:20.986762] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:06:15.493 [2024-07-13 07:52:20.987027] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:06:15.493 [2024-07-13 07:52:20.987056] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:06:15.493 [2024-07-13 07:52:20.987102] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:06:15.493 [2024-07-13 07:52:20.987220] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:06:15.493 [2024-07-13 07:52:20.987322] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:06:15.493 passed 00:06:15.493 Test: lvol_esnap_load_esnaps ...passed 00:06:15.493 Test: lvol_esnap_missing ...passed 00:06:15.493 Test: lvol_esnap_hotplug ... 
00:06:15.493 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:06:15.493 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:06:15.493 lvol_esnap_hotplug scenario 2: PASS - one missing, cb returns -ENOMEM 00:06:15.493 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:06:15.493 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:06:15.493 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:06:15.493 [2024-07-13 07:52:20.987763] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:06:15.493 [2024-07-13 07:52:20.987936] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:06:15.493 [2024-07-13 07:52:20.987985] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:06:15.493 [2024-07-13 07:52:20.988427] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol dee40ba4-b794-4768-9646-8338957ce4f3: failed to create esnap bs_dev: error -12 00:06:15.493 [2024-07-13 07:52:20.988673] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol d93ea5d7-82e6-449d-bacd-ed590892f9a4: failed to create esnap bs_dev: error -12 00:06:15.493 [2024-07-13 07:52:20.988804] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 3e944011-5fc8-43c4-b732-9a0b699fd8b6: failed to create esnap bs_dev: error -12 00:06:15.493 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:06:15.494 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:06:15.494 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:06:15.494 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:06:15.494 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:06:15.494 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:06:15.494 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:06:15.494 passed 00:06:15.494 Test: lvol_get_by ...passed 00:06:15.494 00:06:15.494 Run Summary: Type Total Ran Passed Failed Inactive 00:06:15.494 suites 1 1 n/a 0 0 00:06:15.494 tests 34 34 34 0 0 00:06:15.494 asserts 1439 1439 1439 0 n/a 00:06:15.494 00:06:15.494 Elapsed time = 0.020 seconds 00:06:15.494 ************************************ 00:06:15.494 END TEST unittest_lvol 00:06:15.494 ************************************ 00:06:15.494 00:06:15.494 real 0m0.039s 00:06:15.494 user 0m0.020s 00:06:15.494 sys 0m0.019s 00:06:15.494 07:52:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.494 07:52:21 -- common/autotest_common.sh@10 -- # set +x 00:06:15.494 07:52:21 -- unit/unittest.sh@249 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:15.494 07:52:21 -- unit/unittest.sh@250 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:06:15.494 07:52:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:15.494 07:52:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.494 07:52:21 -- common/autotest_common.sh@10 -- # set +x
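The unittest_log output earlier in this section shows SPDK's dump format wrapping at 16 bytes per line — an 8-digit offset column, the hex bytes, then the bytes again as text (hence the split "spdk dump 16 mor" / "e chars"). A minimal sketch that reproduces this layout; an illustration only, not SPDK's actual dump implementation:

#include <stdio.h>
#include <ctype.h>
#include <stddef.h>

/* Illustrative hex dump in the layout seen in the unittest_log output:
 * 8-digit offset, up to 16 hex bytes, then the same bytes as text. */
static void hex_dump(const void *buf, size_t len)
{
    const unsigned char *p = buf;

    for (size_t off = 0; off < len; off += 16) {
        size_t n = (len - off < 16) ? len - off : 16;

        printf("%08zx ", off);
        for (size_t i = 0; i < n; i++)
            printf("%02x ", p[off + i]);
        for (size_t i = 0; i < n; i++)
            putchar(isprint(p[off + i]) ? p[off + i] : '.');
        putchar('\n');
    }
}

Called on the 23-byte string "spdk dump 16 more chars", this yields exactly the two-line wrap recorded above: 16 bytes on the first line, the remaining 7 ("65 20 63 68 61 72 73 e chars") on the second.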
00:06:15.494 ************************************ 00:06:15.494 START TEST unittest_nvme_rdma 00:06:15.494 ************************************ 00:06:15.494 07:52:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:06:15.494 00:06:15.494 00:06:15.494 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.494 http://cunit.sourceforge.net/ 00:06:15.494 00:06:15.494 00:06:15.494 Suite: nvme_rdma 00:06:15.494 Test: test_nvme_rdma_build_sgl_request ...passed 00:06:15.494 Test: test_nvme_rdma_build_sgl_inline_request ...[2024-07-13 07:52:21.077040] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:06:15.494 [2024-07-13 07:52:21.077296] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1628:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:06:15.494 [2024-07-13 07:52:21.077377] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1684:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:06:15.494 passed 00:06:15.494 Test: test_nvme_rdma_build_contig_request ...passed 00:06:15.494 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:06:15.494 Test: test_nvme_rdma_create_reqs ...passed 00:06:15.494 Test: test_nvme_rdma_create_rsps ...passed 00:06:15.494 Test: test_nvme_rdma_ctrlr_create_qpair ...passed 00:06:15.494 Test: test_nvme_rdma_poller_create ...passed 00:06:15.494 Test: test_nvme_rdma_qpair_process_cm_event ...passed 00:06:15.494 Test: test_nvme_rdma_ctrlr_construct ...[2024-07-13 07:52:21.077680] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1565:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:06:15.494 [2024-07-13 07:52:21.077810] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1007:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:06:15.494 [2024-07-13 07:52:21.078054] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 925:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:06:15.494 [2024-07-13 07:52:21.078159] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:06:15.494 [2024-07-13 07:52:21.078212] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:06:15.494 [2024-07-13 07:52:21.078366] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 526:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:06:15.494 passed 00:06:15.494 Test: test_nvme_rdma_req_put_and_get ...passed 00:06:15.494 Test: test_nvme_rdma_req_init ...passed 00:06:15.494 Test: test_nvme_rdma_validate_cm_event ...passed 00:06:15.494 Test: test_nvme_rdma_qpair_init ...passed 00:06:15.494 Test: test_nvme_rdma_qpair_submit_request ...passed 00:06:15.494 Test: test_nvme_rdma_memory_domain ...passed 00:06:15.494 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:06:15.494 Test: test_rdma_get_memory_translation ...[2024-07-13 07:52:21.078656] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:06:15.494 [2024-07-13 07:52:21.078700] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:06:15.494 [2024-07-13 07:52:21.078792] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 352:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:06:15.494 passed 00:06:15.494 Test: test_get_rdma_qpair_from_wc ...passed 00:06:15.494 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:06:15.494 Test: test_nvme_rdma_poll_group_get_stats ...[2024-07-13 07:52:21.078848] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1444:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:06:15.494 [2024-07-13 07:52:21.078908] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:06:15.494 passed 00:06:15.494 Test: test_nvme_rdma_qpair_set_poller ...[2024-07-13 07:52:21.079022] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:15.494 [2024-07-13 07:52:21.079068] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:15.494 [2024-07-13 07:52:21.079210] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:06:15.494 [2024-07-13 07:52:21.079264] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:06:15.494 [2024-07-13 07:52:21.079315] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffe9b1ea1b0 on poll group 0x60b0000001a0 00:06:15.494 passed 00:06:15.494 00:06:15.494 Run Summary: Type Total Ran Passed Failed Inactive 00:06:15.494 suites 1 1 n/a 0 0 00:06:15.494 tests 22 22 22 0 0 00:06:15.494 asserts 412 412 412 0 n/a 00:06:15.494 00:06:15.494 Elapsed time = 0.000 seconds 00:06:15.494 [2024-07-13 07:52:21.079392] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:06:15.494 [2024-07-13 07:52:21.079447] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:06:15.494 [2024-07-13 07:52:21.079514] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffe9b1ea1b0 on poll group 0x60b0000001a0 00:06:15.494 [2024-07-13 07:52:21.079591] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 701:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:06:15.494 ************************************ 00:06:15.494 END TEST unittest_nvme_rdma 00:06:15.494 ************************************ 00:06:15.494 00:06:15.494 real 0m0.031s 00:06:15.494 user 0m0.015s 00:06:15.494 sys 0m0.016s 00:06:15.494 07:52:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.494 07:52:21 -- common/autotest_common.sh@10 -- # set +x 00:06:15.494 07:52:21 -- unit/unittest.sh@251 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:06:15.494 07:52:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:15.494 07:52:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.494 07:52:21 -- common/autotest_common.sh@10 -- # set +x 00:06:15.494 ************************************ 00:06:15.494 START TEST unittest_nvmf_transport 00:06:15.494 ************************************ 00:06:15.494 07:52:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:06:15.494 00:06:15.494 00:06:15.494 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.494 http://cunit.sourceforge.net/ 00:06:15.494 00:06:15.494 00:06:15.494 Suite: nvmf 00:06:15.494 Test: test_spdk_nvmf_transport_create ...[2024-07-13 07:52:21.155286] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 247:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:06:15.494 [2024-07-13 07:52:21.155564] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 267:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:06:15.494 passed 00:06:15.494 Test: test_nvmf_transport_poll_group_create ...passed 00:06:15.494 Test: test_spdk_nvmf_transport_opts_init ...[2024-07-13 07:52:21.155601] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:06:15.494 [2024-07-13 07:52:21.155716] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 254:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:06:15.494 [2024-07-13 07:52:21.155866] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 788:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:06:15.494 passed 00:06:15.494 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:06:15.494 00:06:15.494 Run Summary: Type Total Ran Passed Failed Inactive 00:06:15.494 suites 1 1 n/a 0 0 00:06:15.494 tests 4 4 4 0 0 00:06:15.494 asserts 49 49 49 0 n/a 00:06:15.494 00:06:15.494 Elapsed time = 0.000 seconds 00:06:15.494 [2024-07-13 07:52:21.155957] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 793:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:06:15.494 [2024-07-13 07:52:21.155989] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 798:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:06:15.494 ************************************ 00:06:15.494 END TEST unittest_nvmf_transport 00:06:15.494 ************************************ 00:06:15.494 00:06:15.494 real 0m0.032s 00:06:15.494 user 0m0.013s 00:06:15.494 sys 0m0.020s 00:06:15.494 07:52:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.494 07:52:21 -- common/autotest_common.sh@10 -- # set +x 00:06:15.494 07:52:21 -- unit/unittest.sh@252 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:06:15.494 07:52:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:15.494 07:52:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.494 07:52:21 -- common/autotest_common.sh@10 -- # set +x 00:06:15.494 ************************************ 00:06:15.494 START TEST unittest_rdma 00:06:15.494 ************************************ 00:06:15.494 07:52:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:06:15.494 00:06:15.494 00:06:15.494 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.494 http://cunit.sourceforge.net/ 00:06:15.494 00:06:15.494 00:06:15.494 Suite: rdma_common 00:06:15.494 Test: test_spdk_rdma_pd ...[2024-07-13 07:52:21.232132] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:06:15.495 [2024-07-13 07:52:21.232396] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:06:15.495 passed 00:06:15.495 00:06:15.495 Run Summary: Type Total Ran Passed Failed Inactive 00:06:15.495 suites 1 1 n/a 0 0 00:06:15.495 tests 1 1 1 0 0 00:06:15.495 asserts 31 31 31 0 n/a 00:06:15.495 00:06:15.495 Elapsed time = 0.000 seconds 00:06:15.495 ************************************ 00:06:15.495 END TEST unittest_rdma 00:06:15.495 ************************************ 00:06:15.495 00:06:15.495 real 0m0.026s 00:06:15.495 user 0m0.014s 00:06:15.495 sys 0m0.012s 00:06:15.495 07:52:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.495 07:52:21 -- common/autotest_common.sh@10 -- # set +x 00:06:15.495 07:52:21 -- unit/unittest.sh@255 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:15.495 07:52:21 -- unit/unittest.sh@256 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:06:15.495 07:52:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:15.495 07:52:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.495 07:52:21 -- common/autotest_common.sh@10 -- # set +x 00:06:15.495 ************************************ 00:06:15.495 START TEST unittest_nvme_cuse 00:06:15.495 ************************************ 00:06:15.495 07:52:21 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:06:15.754 00:06:15.754 00:06:15.754 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.754 http://cunit.sourceforge.net/ 00:06:15.754 00:06:15.754 00:06:15.754 Suite: nvme_cuse 00:06:15.754 Test: test_cuse_nvme_submit_io_read_write ...passed 00:06:15.754 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:06:15.754 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:06:15.754 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:06:15.754 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:06:15.754 Test: test_cuse_nvme_submit_io ...passed 00:06:15.754 Test: test_cuse_nvme_reset ...passed 00:06:15.754 Test: test_nvme_cuse_stop ...passed 00:06:15.754 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:06:15.754 00:06:15.754 Run Summary: Type Total Ran Passed Failed Inactive 00:06:15.754 suites 1 1 n/a 0 0 00:06:15.754 tests 9 9 9 0 0 00:06:15.754 asserts 121 121 121 0 n/a 00:06:15.754 00:06:15.754 Elapsed time = 0.010 seconds 00:06:15.754 [2024-07-13 07:52:21.316765] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 656:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:06:15.754 [2024-07-13 07:52:21.316971] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 341:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:06:15.754 00:06:15.754 real 0m0.029s 00:06:15.754 user 0m0.020s 00:06:15.754 sys 0m0.010s 00:06:15.754 07:52:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.754 07:52:21 -- common/autotest_common.sh@10 -- # set +x 00:06:15.754 ************************************ 00:06:15.755 END TEST unittest_nvme_cuse 00:06:15.755 ************************************ 00:06:15.755 07:52:21 -- unit/unittest.sh@259 -- # run_test unittest_nvmf unittest_nvmf 00:06:15.755 07:52:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:15.755 07:52:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.755 07:52:21 -- common/autotest_common.sh@10 -- # set +x 00:06:15.755 ************************************ 00:06:15.755 START TEST unittest_nvmf 00:06:15.755 ************************************ 00:06:15.755 07:52:21 -- common/autotest_common.sh@1104 -- # unittest_nvmf 00:06:15.755 07:52:21 -- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:06:15.755 00:06:15.755 00:06:15.755 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.755 http://cunit.sourceforge.net/ 00:06:15.755 00:06:15.755 00:06:15.755 Suite: nvmf 00:06:15.755 Test: test_get_log_page ...passed 00:06:15.755 Test: test_process_fabrics_cmd ...passed 00:06:15.755 Test: test_connect ...[2024-07-13 07:52:21.397500] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:06:15.755 [2024-07-13 07:52:21.398083] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 905:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:06:15.755 [2024-07-13 07:52:21.398177] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 768:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:06:15.755 [2024-07-13 07:52:21.398219] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 944:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:06:15.755 [2024-07-13 07:52:21.398252] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 
00:06:15.755 [2024-07-13 07:52:21.398330] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 779:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:06:15.755 [2024-07-13 07:52:21.398366] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 786:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:06:15.755 [2024-07-13 07:52:21.398496] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 792:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:06:15.755 [2024-07-13 07:52:21.398535] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:06:15.755 [2024-07-13 07:52:21.398599] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:06:15.755 [2024-07-13 07:52:21.398649] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 587:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:06:15.755 [2024-07-13 07:52:21.398755] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 593:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:06:15.755 [2024-07-13 07:52:21.398795] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 599:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:06:15.755 [2024-07-13 07:52:21.398860] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 606:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:06:15.755 [2024-07-13 07:52:21.398908] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 623:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:06:15.755 [2024-07-13 07:52:21.398972] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 232:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1 00:06:15.755 passed 00:06:15.755 Test: test_get_ns_id_desc_list ...passed 00:06:15.755 Test: test_identify_ns ...[2024-07-13 07:52:21.399061] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 699:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group (nil)) 00:06:15.755 [2024-07-13 07:52:21.399251] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:15.755 [2024-07-13 07:52:21.399434] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:06:15.755 passed 00:06:15.755 Test: test_identify_ns_iocs_specific ...passed 00:06:15.755 Test: test_reservation_write_exclusive ...passed 00:06:15.755 Test: test_reservation_exclusive_access ...passed 00:06:15.755 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:06:15.755 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:06:15.755 Test: test_reservation_notification_log_page ...passed 00:06:15.755 Test: test_get_dif_ctx ...passed 00:06:15.755 Test: test_set_get_features ...[2024-07-13 07:52:21.399581] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:06:15.755 [2024-07-13 07:52:21.399701] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:15.755 [2024-07-13 07:52:21.399876] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:15.755 [2024-07-13 07:52:21.400494] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:06:15.755 [2024-07-13 07:52:21.400527] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:06:15.755 [2024-07-13 07:52:21.400563] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1545:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:06:15.755 passed 00:06:15.755 Test: test_identify_ctrlr ...passed 00:06:15.755 Test: test_identify_ctrlr_iocs_specific ...passed 00:06:15.755 Test: test_custom_admin_cmd ...passed 00:06:15.755 Test: test_fused_compare_and_write ...passed 00:06:15.755 Test: test_multi_async_event_reqs ...passed 00:06:15.755 Test: test_get_ana_log_page_one_ns_per_anagrp ...[2024-07-13 07:52:21.400609] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1621:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:06:15.755 [2024-07-13 07:52:21.400938] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4105:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:06:15.755 [2024-07-13 07:52:21.400968] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4094:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:06:15.755 [2024-07-13 07:52:21.401002] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4112:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:06:15.755 passed 00:06:15.755 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:06:15.755 Test: test_multi_async_events ...passed 00:06:15.755 Test: test_rae ...passed 00:06:15.755 Test: test_nvmf_ctrlr_create_destruct ...passed 00:06:15.755 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:06:15.755 Test: test_spdk_nvmf_request_zcopy_start ...passed 00:06:15.755 Test: test_zcopy_read ...passed 00:06:15.755 Test: test_zcopy_write ...[2024-07-13 07:52:21.401272] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4232:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT 00:06:15.755 passed 00:06:15.755 Test: test_nvmf_property_set ...passed 00:06:15.755 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...passed 00:06:15.755 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...passed 00:06:15.755 00:06:15.755 [2024-07-13 07:52:21.401417] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:06:15.755 [2024-07-13 07:52:21.401494] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:06:15.755 [2024-07-13 07:52:21.401539] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1855:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:06:15.755 [2024-07-13 07:52:21.401572] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1861:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:06:15.755 [2024-07-13 07:52:21.401599] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1873:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:06:15.755 Run Summary: Type Total Ran Passed Failed Inactive 00:06:15.755 suites 1 1 n/a 0 0 00:06:15.755 tests 30 30 30 0 0 00:06:15.755 asserts 885 885 885 0 n/a 00:06:15.755 00:06:15.755 Elapsed time = 0.000 seconds 00:06:15.755 07:52:21 -- unit/unittest.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:06:15.755 00:06:15.755 00:06:15.755 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.755 http://cunit.sourceforge.net/ 00:06:15.755 00:06:15.755 00:06:15.755 Suite: nvmf 00:06:15.755 Test: test_get_rw_params ...passed 00:06:15.755 Test: test_lba_in_range ...passed 00:06:15.755 Test: test_get_dif_ctx ...passed 00:06:15.755 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:06:15.755 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...passed 00:06:15.755 Test: test_nvmf_bdev_ctrlr_zcopy_start ...passed 00:06:15.755 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-07-13 07:52:21.425926] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:06:15.755 [2024-07-13 07:52:21.426127] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:06:15.755 [2024-07-13 07:52:21.426204] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 450:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:06:15.755 [2024-07-13 07:52:21.426265] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 946:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:06:15.755 [2024-07-13 07:52:21.426349] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 953:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:06:15.755 [2024-07-13 07:52:21.426480] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:06:15.755 [2024-07-13 07:52:21.426518] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 396:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:06:15.755 [2024-07-13 07:52:21.426574] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:06:15.755 [2024-07-13 07:52:21.426621] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:06:15.755 passed 00:06:15.755 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:06:15.755 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:06:15.755 00:06:15.755 Run Summary: Type Total Ran Passed Failed Inactive 00:06:15.755 suites 1 1 n/a 0 0 00:06:15.755 tests 9 9 9 0 0 00:06:15.755 asserts 157 157 157 0 n/a 00:06:15.755 00:06:15.755 Elapsed time = 0.000 seconds 00:06:15.755 07:52:21 -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:06:15.755 00:06:15.755 00:06:15.755 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.755 http://cunit.sourceforge.net/ 00:06:15.755 00:06:15.755 00:06:15.755 Suite: nvmf 00:06:15.755 Test: test_discovery_log ...passed 00:06:15.755 Test: test_discovery_log_with_filters ...passed 00:06:15.755 00:06:15.755 Run Summary: Type Total Ran Passed Failed Inactive 00:06:15.755 suites 1 1 n/a 0 0 00:06:15.755 tests 2 2 2 0 0 00:06:15.755 asserts 238 238 238 0 n/a 00:06:15.755 00:06:15.755 Elapsed time = 0.000 seconds 00:06:15.755 07:52:21 -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:06:15.755 00:06:15.755 00:06:15.755 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.755 http://cunit.sourceforge.net/ 00:06:15.755 00:06:15.755 00:06:15.755 Suite: nvmf 
00:06:15.756 Test: nvmf_test_create_subsystem ...[2024-07-13 07:52:21.481827] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:06:15.756 [2024-07-13 07:52:21.482079] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:06:15.756 [2024-07-13 07:52:21.482168] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:06:15.756 [2024-07-13 07:52:21.482220] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:06:15.756 [2024-07-13 07:52:21.482258] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:06:15.756 [2024-07-13 07:52:21.482299] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:06:15.756 [2024-07-13 07:52:21.482390] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:06:15.756 passed 00:06:15.756 Test: test_spdk_nvmf_subsystem_add_ns ...passed 00:06:15.756 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:06:15.756 Test: test_reservation_register ...[2024-07-13 07:52:21.482731] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
00:06:15.756 [2024-07-13 07:52:21.482837] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:06:15.756 [2024-07-13 07:52:21.482900] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:06:15.756 [2024-07-13 07:52:21.482946] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:06:15.756 [2024-07-13 07:52:21.483248] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:06:15.756 [2024-07-13 07:52:21.483420] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1774:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:06:15.756 [2024-07-13 07:52:21.483873] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:15.756 passed 00:06:15.756 Test: test_reservation_register_with_ptpl ...[2024-07-13 07:52:21.484000] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2881:nvmf_ns_reservation_register: *ERROR*: No registrant 00:06:15.756 passed 00:06:15.756 Test: test_reservation_acquire_preempt_1 ...[2024-07-13 07:52:21.485309] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:15.756 passed 00:06:15.756 Test: test_reservation_acquire_release_with_ptpl ...passed 00:06:15.756 Test: test_reservation_release ...passed 00:06:15.756 Test: test_reservation_unregister_notification ...[2024-07-13 07:52:21.487350] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:15.756 [2024-07-13 07:52:21.487675] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:15.756 passed 00:06:15.756 Test: test_reservation_release_notification ...passed 00:06:15.756 Test: test_reservation_release_notification_write_exclusive ...passed 00:06:15.756 Test: test_reservation_clear_notification ...passed 00:06:15.756 Test: test_reservation_preempt_notification ...passed 00:06:15.756 Test: test_spdk_nvmf_ns_event ...passed 00:06:15.756 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:06:15.756 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:06:15.756 Test: test_spdk_nvmf_subsystem_add_host ...passed 00:06:15.756 Test: test_nvmf_ns_reservation_report ...passed 00:06:15.756 Test: test_nvmf_nqn_is_valid ...passed 00:06:15.756 Test: test_nvmf_ns_reservation_restore ...passed 00:06:15.756 Test: test_nvmf_subsystem_state_change ...passed 00:06:15.756 Test: test_nvmf_reservation_custom_ops ...passed 00:06:15.756 00:06:15.756 Run Summary: Type Total Ran Passed Failed Inactive 00:06:15.756 suites 1 1 n/a 0 0 00:06:15.756 tests 22 22 22 0 0 00:06:15.756 asserts 407 407 407 0 n/a 00:06:15.756 00:06:15.756 Elapsed time = 0.010 seconds 00:06:15.756 [2024-07-13 07:52:21.488033] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:15.756 
[2024-07-13 07:52:21.488267] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:15.756 [2024-07-13 07:52:21.488518] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:15.756 [2024-07-13 07:52:21.488787] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:15.756 [2024-07-13 07:52:21.489491] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 260:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:06:15.756 [2024-07-13 07:52:21.489579] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport 00:06:15.756 [2024-07-13 07:52:21.489763] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3186:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:06:15.756 [2024-07-13 07:52:21.489888] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:06:15.756 [2024-07-13 07:52:21.489958] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:a61eddae-6684-457a-a438-7faaa225106": uuid is not the correct length 00:06:15.756 [2024-07-13 07:52:21.490022] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:06:15.756 [2024-07-13 07:52:21.490247] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2380:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:06:15.756 07:52:21 -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:06:15.756 00:06:15.756 00:06:15.756 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.756 http://cunit.sourceforge.net/ 00:06:15.756 00:06:15.756 00:06:15.756 Suite: nvmf 00:06:15.756 Test: test_nvmf_tcp_create ...passed 00:06:15.756 Test: test_nvmf_tcp_destroy ...[2024-07-13 07:52:21.549829] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 730:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:06:16.015 passed 00:06:16.015 Test: test_nvmf_tcp_poll_group_create ...passed 00:06:16.015 Test: test_nvmf_tcp_send_c2h_data ...passed 00:06:16.015 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:06:16.015 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:06:16.015 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:06:16.015 Test: test_nvmf_tcp_send_c2h_term_req ...passed 00:06:16.015 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:06:16.015 Test: test_nvmf_tcp_icreq_handle ...passed 00:06:16.015 Test: test_nvmf_tcp_check_xfer_type ...passed 00:06:16.015 Test: test_nvmf_tcp_invalid_sgl ...[2024-07-13 07:52:21.679587] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:16.015 [2024-07-13 07:52:21.679695] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd86af190 is same with the state(5) to be set 00:06:16.015 [2024-07-13 07:52:21.679789] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd86af190 is same with the state(5) to be set 00:06:16.015 [2024-07-13 07:52:21.679827] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:16.015 [2024-07-13 07:52:21.679853] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd86af190 is same with the state(5) to be set 00:06:16.015 [2024-07-13 07:52:21.679929] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:06:16.015 [2024-07-13 07:52:21.680015] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:16.016 [2024-07-13 07:52:21.680067] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd86af190 is same with the state(5) to be set 00:06:16.016 [2024-07-13 07:52:21.680109] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:06:16.016 [2024-07-13 07:52:21.680156] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd86af190 is same with the state(5) to be set 00:06:16.016 [2024-07-13 07:52:21.680203] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:16.016 [2024-07-13 07:52:21.680239] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd86af190 is same with the state(5) to be set 00:06:16.016 [2024-07-13 07:52:21.680267] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:06:16.016 [2024-07-13 07:52:21.680329] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd86af190 is same with the state(5) to be set 00:06:16.016 [2024-07-13 07:52:21.680395] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2484:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:06:16.016 [2024-07-13 07:52:21.680430] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:16.016 passed 00:06:16.016 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-07-13 07:52:21.680699] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd86af190 is same with the state(5) to be set 00:06:16.016 [2024-07-13 07:52:21.680778] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2216:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffdd86afef0 00:06:16.016 [2024-07-13 07:52:21.680886] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:16.016 [2024-07-13 07:52:21.680934] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd86af650 is same with the state(5) to be set 00:06:16.016 [2024-07-13 07:52:21.680978] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2273:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffdd86af650 00:06:16.016 [2024-07-13 
07:52:21.681006] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:16.016 [2024-07-13 07:52:21.681043] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd86af650 is same with the state(5) to be set 00:06:16.016 [2024-07-13 07:52:21.681068] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2226:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:06:16.016 [2024-07-13 07:52:21.681109] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:16.016 [2024-07-13 07:52:21.681168] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd86af650 is same with the state(5) to be set 00:06:16.016 [2024-07-13 07:52:21.681212] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2265:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:06:16.016 [2024-07-13 07:52:21.681251] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:16.016 [2024-07-13 07:52:21.681283] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd86af650 is same with the state(5) to be set 00:06:16.016 [2024-07-13 07:52:21.681310] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:16.016 [2024-07-13 07:52:21.681342] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd86af650 is same with the state(5) to be set 00:06:16.016 passed 00:06:16.016 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-07-13 07:52:21.681400] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:16.016 [2024-07-13 07:52:21.681428] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd86af650 is same with the state(5) to be set 00:06:16.016 [2024-07-13 07:52:21.681485] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:16.016 [2024-07-13 07:52:21.681512] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd86af650 is same with the state(5) to be set 00:06:16.016 [2024-07-13 07:52:21.681553] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:16.016 [2024-07-13 07:52:21.681581] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd86af650 is same with the state(5) to be set 00:06:16.016 [2024-07-13 07:52:21.681627] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:16.016 [2024-07-13 07:52:21.681655] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd86af650 is same with the state(5) to be set 00:06:16.016 [2024-07-13 07:52:21.681688] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:16.016 [2024-07-13 07:52:21.681712] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd86af650 is same with the state(5) to be set 00:06:16.016 passed 00:06:16.016 Test: test_nvmf_tcp_tls_generate_psk_id ...passed 00:06:16.016 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-07-13 07:52:21.703293] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:06:16.016 [2024-07-13 07:52:21.703365] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:06:16.016 passed 00:06:16.016 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-07-13 07:52:21.703597] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:06:16.016 [2024-07-13 07:52:21.703637] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:06:16.016 passed 00:06:16.016 00:06:16.016 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.016 suites 1 1 n/a 0 0 00:06:16.016 tests 17 17 17 0 0 00:06:16.016 asserts 222 222 222 0 n/a 00:06:16.016 00:06:16.016 Elapsed time = 0.190 seconds 00:06:16.016 [2024-07-13 07:52:21.703754] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:06:16.016 [2024-07-13 07:52:21.703784] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:06:16.016 07:52:21 -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:06:16.016 00:06:16.016 00:06:16.016 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.016 http://cunit.sourceforge.net/ 00:06:16.016 00:06:16.016 00:06:16.016 Suite: nvmf 00:06:16.016 Test: test_nvmf_tgt_create_poll_group ...passed 00:06:16.016 00:06:16.016 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.016 suites 1 1 n/a 0 0 00:06:16.016 tests 1 1 1 0 0 00:06:16.016 asserts 17 17 17 0 n/a 00:06:16.016 00:06:16.016 Elapsed time = 0.030 seconds 00:06:16.275 ************************************ 00:06:16.275 END TEST unittest_nvmf 00:06:16.275 ************************************ 00:06:16.275 00:06:16.275 real 0m0.501s 00:06:16.275 user 0m0.230s 00:06:16.275 sys 0m0.272s 00:06:16.275 07:52:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.275 07:52:21 -- common/autotest_common.sh@10 -- # set +x 00:06:16.275 07:52:21 -- unit/unittest.sh@260 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:16.275 07:52:21 -- unit/unittest.sh@265 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:16.275 07:52:21 -- unit/unittest.sh@266 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:06:16.275 07:52:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:16.275 07:52:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:16.275 07:52:21 -- common/autotest_common.sh@10 -- # set +x 00:06:16.275 ************************************ 00:06:16.275 START TEST unittest_nvmf_rdma 00:06:16.275 ************************************ 00:06:16.275 07:52:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 
00:06:16.275 00:06:16.275 00:06:16.275 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.275 http://cunit.sourceforge.net/ 00:06:16.275 00:06:16.275 00:06:16.275 Suite: nvmf 00:06:16.275 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-07-13 07:52:21.957311] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:06:16.275 [2024-07-13 07:52:21.957587] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1964:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:06:16.275 passed 00:06:16.275 Test: test_spdk_nvmf_rdma_request_process ...[2024-07-13 07:52:21.957630] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1964:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:06:16.275 passed 00:06:16.275 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:06:16.275 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:06:16.275 Test: test_nvmf_rdma_opts_init ...passed 00:06:16.275 Test: test_nvmf_rdma_request_free_data ...passed 00:06:16.275 Test: test_nvmf_rdma_update_ibv_state ...passed 00:06:16.275 Test: test_nvmf_rdma_resources_create ...passed 00:06:16.275 Test: test_nvmf_rdma_qpair_compare ...passed 00:06:16.275 Test: test_nvmf_rdma_resize_cq ...passed 00:06:16.275 00:06:16.275 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.275 suites 1 1 n/a 0 0 00:06:16.275 tests 10 10 10 0 0 00:06:16.275 asserts 584 584 584 0 n/a 00:06:16.275 00:06:16.275 Elapsed time = 0.010 seconds 00:06:16.275 [2024-07-13 07:52:21.958318] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 614:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state! 00:06:16.275 [2024-07-13 07:52:21.958356] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 625:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue 00:06:16.275 [2024-07-13 07:52:21.959568] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1006:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. 
Current capacity 20, required 0 00:06:16.275 Using CQ of insufficient size may lead to CQ overrun 00:06:16.276 [2024-07-13 07:52:21.959669] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1011:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:06:16.276 [2024-07-13 07:52:21.959726] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1019:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:06:16.276 ************************************ 00:06:16.276 END TEST unittest_nvmf_rdma 00:06:16.276 ************************************ 00:06:16.276 00:06:16.276 real 0m0.034s 00:06:16.276 user 0m0.019s 00:06:16.276 sys 0m0.015s 00:06:16.276 07:52:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.276 07:52:21 -- common/autotest_common.sh@10 -- # set +x 00:06:16.276 07:52:22 -- unit/unittest.sh@269 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:16.276 07:52:22 -- unit/unittest.sh@273 -- # run_test unittest_scsi unittest_scsi 00:06:16.276 07:52:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:16.276 07:52:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:16.276 07:52:22 -- common/autotest_common.sh@10 -- # set +x 00:06:16.276 ************************************ 00:06:16.276 START TEST unittest_scsi 00:06:16.276 ************************************ 00:06:16.276 07:52:22 -- common/autotest_common.sh@1104 -- # unittest_scsi 00:06:16.276 07:52:22 -- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:06:16.276 00:06:16.276 00:06:16.276 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.276 http://cunit.sourceforge.net/ 00:06:16.276 00:06:16.276 00:06:16.276 Suite: dev_suite 00:06:16.276 Test: dev_destruct_null_dev ...passed 00:06:16.276 Test: dev_destruct_zero_luns ...passed 00:06:16.276 Test: dev_destruct_null_lun ...passed 00:06:16.276 Test: dev_destruct_success ...passed 00:06:16.276 Test: dev_construct_num_luns_zero ...passed 00:06:16.276 Test: dev_construct_no_lun_zero ...passed 00:06:16.276 Test: dev_construct_null_lun ...passed 00:06:16.276 Test: dev_construct_name_too_long ...passed 00:06:16.276 Test: dev_construct_success ...passed 00:06:16.276 Test: dev_construct_success_lun_zero_not_first ...passed 00:06:16.276 Test: dev_queue_mgmt_task_success ...passed 00:06:16.276 Test: dev_queue_task_success ...passed 00:06:16.276 Test: dev_stop_success ...passed 00:06:16.276 Test: dev_add_port_max_ports ...passed 00:06:16.276 Test: dev_add_port_construct_failure1 ...passed 00:06:16.276 Test: dev_add_port_construct_failure2 ...passed 00:06:16.276 Test: dev_add_port_success1 ...passed 00:06:16.276 Test: dev_add_port_success2 ...passed 00:06:16.276 Test: dev_add_port_success3 ...passed 00:06:16.276 Test: dev_find_port_by_id_num_ports_zero ...passed 00:06:16.276 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:06:16.276 Test: dev_find_port_by_id_success ...passed 00:06:16.276 Test: dev_add_lun_bdev_not_found ...passed 00:06:16.276 Test: dev_add_lun_no_free_lun_id ...[2024-07-13 07:52:22.047676] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:06:16.276 [2024-07-13 07:52:22.047923] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:06:16.276 [2024-07-13 07:52:22.047958] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 
247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:06:16.276 [2024-07-13 07:52:22.048002] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:06:16.276 [2024-07-13 07:52:22.048236] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:06:16.276 [2024-07-13 07:52:22.048324] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:06:16.276 [2024-07-13 07:52:22.048411] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:06:16.276 passed 00:06:16.276 Test: dev_add_lun_success1 ...passed 00:06:16.276 Test: dev_add_lun_success2 ...passed 00:06:16.276 Test: dev_check_pending_tasks ...passed 00:06:16.276 Test: dev_iterate_luns ...passed 00:06:16.276 Test: dev_find_free_lun ...[2024-07-13 07:52:22.048815] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:06:16.276 passed 00:06:16.276 00:06:16.276 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.276 suites 1 1 n/a 0 0 00:06:16.276 tests 29 29 29 0 0 00:06:16.276 asserts 97 97 97 0 n/a 00:06:16.276 00:06:16.276 Elapsed time = 0.010 seconds 00:06:16.276 07:52:22 -- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:06:16.276 00:06:16.276 00:06:16.276 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.276 http://cunit.sourceforge.net/ 00:06:16.276 00:06:16.276 00:06:16.276 Suite: lun_suite 00:06:16.276 Test: lun_task_mgmt_execute_abort_task_not_supported ...passed 00:06:16.276 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...passed 00:06:16.276 Test: lun_task_mgmt_execute_lun_reset ...passed 00:06:16.276 Test: lun_task_mgmt_execute_target_reset ...passed 00:06:16.276 Test: lun_task_mgmt_execute_invalid_case ...passed 00:06:16.276 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:06:16.276 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:06:16.276 Test: lun_append_task_null_lun_not_supported ...passed 00:06:16.276 Test: lun_execute_scsi_task_pending ...passed 00:06:16.276 Test: lun_execute_scsi_task_complete ...passed 00:06:16.276 Test: lun_execute_scsi_task_resize ...passed 00:06:16.276 Test: lun_destruct_success ...[2024-07-13 07:52:22.080211] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:06:16.276 [2024-07-13 07:52:22.080479] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:06:16.276 [2024-07-13 07:52:22.080594] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:06:16.276 passed 00:06:16.276 Test: lun_construct_null_ctx ...passed 00:06:16.276 Test: lun_construct_success ...passed 00:06:16.276 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:06:16.276 Test: lun_reset_task_suspend_scsi_task ...passed 00:06:16.276 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:06:16.276 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 
00:06:16.276 00:06:16.276 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.276 suites 1 1 n/a 0 0 00:06:16.276 tests 18 18 18 0 0 00:06:16.276 asserts 153 153 153 0 n/a 00:06:16.276 00:06:16.276 Elapsed time = 0.000 seconds 00:06:16.276 [2024-07-13 07:52:22.080714] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:06:16.535 07:52:22 -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:06:16.535 00:06:16.535 00:06:16.535 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.535 http://cunit.sourceforge.net/ 00:06:16.535 00:06:16.535 00:06:16.535 Suite: scsi_suite 00:06:16.535 Test: scsi_init ...passed 00:06:16.535 00:06:16.535 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.535 suites 1 1 n/a 0 0 00:06:16.535 tests 1 1 1 0 0 00:06:16.535 asserts 1 1 1 0 n/a 00:06:16.535 00:06:16.535 Elapsed time = 0.000 seconds 00:06:16.535 07:52:22 -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:06:16.535 00:06:16.535 00:06:16.535 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.535 http://cunit.sourceforge.net/ 00:06:16.535 00:06:16.535 00:06:16.535 Suite: translation_suite 00:06:16.535 Test: mode_select_6_test ...passed 00:06:16.535 Test: mode_select_6_test2 ...passed 00:06:16.535 Test: mode_sense_6_test ...passed 00:06:16.535 Test: mode_sense_10_test ...passed 00:06:16.535 Test: inquiry_evpd_test ...passed 00:06:16.535 Test: inquiry_standard_test ...passed 00:06:16.535 Test: inquiry_overflow_test ...passed 00:06:16.535 Test: task_complete_test ...passed 00:06:16.535 Test: lba_range_test ...passed 00:06:16.535 Test: xfer_len_test ...passed 00:06:16.535 Test: xfer_test ...passed 00:06:16.535 Test: scsi_name_padding_test ...passed 00:06:16.535 Test: get_dif_ctx_test ...[2024-07-13 07:52:22.141332] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:06:16.535 passed 00:06:16.535 Test: unmap_split_test ...passed 00:06:16.535 00:06:16.535 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.535 suites 1 1 n/a 0 0 00:06:16.535 tests 14 14 14 0 0 00:06:16.535 asserts 1200 1200 1200 0 n/a 00:06:16.535 00:06:16.535 Elapsed time = 0.000 seconds 00:06:16.535 07:52:22 -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:06:16.535 00:06:16.535 00:06:16.535 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.535 http://cunit.sourceforge.net/ 00:06:16.535 00:06:16.535 00:06:16.535 Suite: reservation_suite 00:06:16.535 Test: test_reservation_register ...passed 00:06:16.535 Test: test_reservation_reserve ...[2024-07-13 07:52:22.161285] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:16.535 [2024-07-13 07:52:22.161558] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:16.535 [2024-07-13 07:52:22.161613] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:06:16.535 [2024-07-13 07:52:22.161703] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:06:16.535 passed 00:06:16.535 Test: test_reservation_preempt_non_all_regs ...passed 00:06:16.535 Test: 
test_reservation_preempt_all_regs ...[2024-07-13 07:52:22.161750] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:16.535 [2024-07-13 07:52:22.161802] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:06:16.535 [2024-07-13 07:52:22.161907] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:16.535 passed 00:06:16.535 Test: test_reservation_cmds_conflict ...[2024-07-13 07:52:22.162004] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:16.535 [2024-07-13 07:52:22.162060] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:06:16.535 [2024-07-13 07:52:22.162104] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:06:16.535 [2024-07-13 07:52:22.162130] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:06:16.535 [2024-07-13 07:52:22.162162] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:06:16.535 [2024-07-13 07:52:22.162186] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:06:16.535 passed 00:06:16.535 Test: test_scsi2_reserve_release ...passed 00:06:16.535 Test: test_pr_with_scsi2_reserve_release ...[2024-07-13 07:52:22.162272] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:16.535 passed 00:06:16.535 00:06:16.535 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.535 suites 1 1 n/a 0 0 00:06:16.535 tests 7 7 7 0 0 00:06:16.535 asserts 257 257 257 0 n/a 00:06:16.535 00:06:16.535 Elapsed time = 0.000 seconds 00:06:16.535 ************************************ 00:06:16.535 END TEST unittest_scsi 00:06:16.535 ************************************ 00:06:16.535 00:06:16.535 real 0m0.141s 00:06:16.535 user 0m0.070s 00:06:16.535 sys 0m0.072s 00:06:16.535 07:52:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.535 07:52:22 -- common/autotest_common.sh@10 -- # set +x 00:06:16.535 07:52:22 -- unit/unittest.sh@276 -- # uname -s 00:06:16.535 07:52:22 -- unit/unittest.sh@276 -- # '[' Linux = Linux ']' 00:06:16.535 07:52:22 -- unit/unittest.sh@277 -- # run_test unittest_sock unittest_sock 00:06:16.535 07:52:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:16.535 07:52:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:16.535 07:52:22 -- common/autotest_common.sh@10 -- # set +x 00:06:16.535 ************************************ 00:06:16.535 START TEST unittest_sock 00:06:16.535 ************************************ 00:06:16.535 07:52:22 -- common/autotest_common.sh@1104 -- # unittest_sock 00:06:16.535 07:52:22 -- unit/unittest.sh@123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:06:16.535 00:06:16.535 00:06:16.535 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.535 http://cunit.sourceforge.net/ 00:06:16.535 00:06:16.535 00:06:16.535 Suite: sock 
00:06:16.535 Test: posix_sock ...passed 00:06:16.535 Test: ut_sock ...passed 00:06:16.535 Test: posix_sock_group ...passed 00:06:16.535 Test: ut_sock_group ...passed 00:06:16.535 Test: posix_sock_group_fairness ...passed 00:06:16.535 Test: _posix_sock_close ...passed 00:06:16.535 Test: sock_get_default_opts ...passed 00:06:16.535 Test: ut_sock_impl_get_set_opts ...passed 00:06:16.535 Test: posix_sock_impl_get_set_opts ...passed 00:06:16.535 Test: ut_sock_map ...passed 00:06:16.535 Test: override_impl_opts ...passed 00:06:16.535 Test: ut_sock_group_get_ctx ...passed 00:06:16.535 00:06:16.535 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.536 suites 1 1 n/a 0 0 00:06:16.536 tests 12 12 12 0 0 00:06:16.536 asserts 349 349 349 0 n/a 00:06:16.536 00:06:16.536 Elapsed time = 0.010 seconds 00:06:16.536 07:52:22 -- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:06:16.536 00:06:16.536 00:06:16.536 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.536 http://cunit.sourceforge.net/ 00:06:16.536 00:06:16.536 00:06:16.536 Suite: posix 00:06:16.536 Test: flush ...passed 00:06:16.536 00:06:16.536 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.536 suites 1 1 n/a 0 0 00:06:16.536 tests 1 1 1 0 0 00:06:16.536 asserts 28 28 28 0 n/a 00:06:16.536 00:06:16.536 Elapsed time = 0.000 seconds 00:06:16.536 07:52:22 -- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:16.536 ************************************ 00:06:16.536 END TEST unittest_sock 00:06:16.536 ************************************ 00:06:16.536 00:06:16.536 real 0m0.090s 00:06:16.536 user 0m0.033s 00:06:16.536 sys 0m0.035s 00:06:16.536 07:52:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.536 07:52:22 -- common/autotest_common.sh@10 -- # set +x 00:06:16.536 07:52:22 -- unit/unittest.sh@279 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:06:16.536 07:52:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:16.536 07:52:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:16.536 07:52:22 -- common/autotest_common.sh@10 -- # set +x 00:06:16.795 ************************************ 00:06:16.795 START TEST unittest_thread 00:06:16.795 ************************************ 00:06:16.795 07:52:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:06:16.795 00:06:16.795 00:06:16.795 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.795 http://cunit.sourceforge.net/ 00:06:16.795 00:06:16.795 00:06:16.795 Suite: io_channel 00:06:16.795 Test: thread_alloc ...passed 00:06:16.795 Test: thread_send_msg ...passed 00:06:16.795 Test: thread_poller ...passed 00:06:16.795 Test: poller_pause ...passed 00:06:16.795 Test: thread_for_each ...passed 00:06:16.795 Test: for_each_channel_remove ...passed 00:06:16.795 Test: for_each_channel_unreg ...[2024-07-13 07:52:22.380299] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x7fff0dea8c10 already registered (old:0x613000000200 new:0x6130000003c0) 00:06:16.795 passed 00:06:16.795 Test: thread_name ...passed 00:06:16.795 Test: channel ...passed 00:06:16.795 Test: channel_destroy_races ...[2024-07-13 07:52:22.383031] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2297:spdk_get_io_channel: *ERROR*: could not find io_device 0x48e820 00:06:16.795 passed 00:06:16.795 
Test: thread_exit_test ...[2024-07-13 07:52:22.386515] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 629:thread_exit: *ERROR*: thread 0x618000005c80 got timeout, and move it to the exited state forcefully 00:06:16.795 passed 00:06:16.795 Test: thread_update_stats_test ...passed 00:06:16.795 Test: nested_channel ...passed 00:06:16.795 Test: device_unregister_and_thread_exit_race ...passed 00:06:16.795 Test: cache_closest_timed_poller ...passed 00:06:16.795 Test: multi_timed_pollers_have_same_expiration ...passed 00:06:16.795 Test: io_device_lookup ...passed 00:06:16.795 Test: spdk_spin ...[2024-07-13 07:52:22.393612] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3061:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:06:16.795 [2024-07-13 07:52:22.393668] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7fff0dea8bf0 00:06:16.795 [2024-07-13 07:52:22.393748] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3099:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:06:16.795 [2024-07-13 07:52:22.394954] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:06:16.795 [2024-07-13 07:52:22.395026] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7fff0dea8bf0 00:06:16.795 [2024-07-13 07:52:22.395057] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:06:16.795 [2024-07-13 07:52:22.395090] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7fff0dea8bf0 00:06:16.795 [2024-07-13 07:52:22.395120] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:06:16.795 [2024-07-13 07:52:22.395154] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7fff0dea8bf0 00:06:16.795 [2024-07-13 07:52:22.395185] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3043:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:06:16.795 [2024-07-13 07:52:22.395229] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7fff0dea8bf0 00:06:16.795 passed 00:06:16.795 Test: for_each_channel_and_thread_exit_race ...passed 00:06:16.795 Test: for_each_thread_and_thread_exit_race ...passed 00:06:16.795 00:06:16.795 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.795 suites 1 1 n/a 0 0 00:06:16.795 tests 20 20 20 0 0 00:06:16.795 asserts 409 409 409 0 n/a 00:06:16.795 00:06:16.795 Elapsed time = 0.030 seconds 00:06:16.795 00:06:16.795 real 0m0.069s 00:06:16.795 user 0m0.049s 00:06:16.795 sys 0m0.020s 00:06:16.795 07:52:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.795 07:52:22 -- common/autotest_common.sh@10 -- # set +x 00:06:16.795 ************************************ 00:06:16.795 END TEST unittest_thread 00:06:16.795 ************************************ 00:06:16.795 07:52:22 -- unit/unittest.sh@280 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:06:16.795 07:52:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:16.795 07:52:22 
-- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:16.795 07:52:22 -- common/autotest_common.sh@10 -- # set +x 00:06:16.795 ************************************ 00:06:16.795 START TEST unittest_iobuf 00:06:16.795 ************************************ 00:06:16.795 07:52:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:06:16.795 00:06:16.795 00:06:16.795 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.795 http://cunit.sourceforge.net/ 00:06:16.795 00:06:16.795 00:06:16.795 Suite: io_channel 00:06:16.795 Test: iobuf ...passed 00:06:16.795 Test: iobuf_cache ...passed 00:06:16.795 00:06:16.795 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.795 suites 1 1 n/a 0 0 00:06:16.795 tests 2 2 2 0 0 00:06:16.795 asserts 107 107 107 0 n/a 00:06:16.795 00:06:16.795 Elapsed time = 0.010 seconds 00:06:16.795 [2024-07-13 07:52:22.483855] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:06:16.795 [2024-07-13 07:52:22.484100] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:06:16.795 [2024-07-13 07:52:22.484216] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:06:16.795 [2024-07-13 07:52:22.484253] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:06:16.795 [2024-07-13 07:52:22.484301] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:06:16.795 [2024-07-13 07:52:22.484339] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
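The iobuf *ERROR* lines above double as a tuning hint: the per-channel caches could not be populated because the global pools were created with only 4 buffers. A hedged sketch of the adjustment those messages point at, assuming the spdk_iobuf_get_opts()/spdk_iobuf_set_opts() pair from spdk/thread.h (exact signatures vary across SPDK releases, so treat this as an illustration, not a drop-in):

    #include "spdk/thread.h"

    /* Raise the global iobuf pool counts before spdk_iobuf_initialize() so
     * per-channel caches can be filled. The counts below are arbitrary
     * example values, not recommendations; the log itself points at
     * scripts/calc-iobuf.py for deriving real ones. */
    static int
    configure_iobuf_pools(void)
    {
        struct spdk_iobuf_opts opts = {};

        spdk_iobuf_get_opts(&opts);      /* start from the current defaults */
        opts.small_pool_count = 8192;    /* the run above had a pool of just 4 */
        opts.large_pool_count = 1024;

        return spdk_iobuf_set_opts(&opts);
    }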
00:06:16.795 ************************************ 00:06:16.795 END TEST unittest_iobuf 00:06:16.795 ************************************ 00:06:16.795 00:06:16.795 real 0m0.035s 00:06:16.795 user 0m0.018s 00:06:16.795 sys 0m0.018s 00:06:16.795 07:52:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.795 07:52:22 -- common/autotest_common.sh@10 -- # set +x 00:06:16.795 07:52:22 -- unit/unittest.sh@281 -- # run_test unittest_util unittest_util 00:06:16.795 07:52:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:16.795 07:52:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:16.795 07:52:22 -- common/autotest_common.sh@10 -- # set +x 00:06:16.795 ************************************ 00:06:16.795 START TEST unittest_util 00:06:16.795 ************************************ 00:06:16.795 07:52:22 -- common/autotest_common.sh@1104 -- # unittest_util 00:06:16.795 07:52:22 -- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:06:16.795 00:06:16.795 00:06:16.795 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.795 http://cunit.sourceforge.net/ 00:06:16.795 00:06:16.795 00:06:16.795 Suite: base64 00:06:16.795 Test: test_base64_get_encoded_strlen ...passed 00:06:16.795 Test: test_base64_get_decoded_len ...passed 00:06:16.795 Test: test_base64_encode ...passed 00:06:16.795 Test: test_base64_decode ...passed 00:06:16.795 Test: test_base64_urlsafe_encode ...passed 00:06:16.795 Test: test_base64_urlsafe_decode ...passed 00:06:16.795 00:06:16.795 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.795 suites 1 1 n/a 0 0 00:06:16.795 tests 6 6 6 0 0 00:06:16.795 asserts 112 112 112 0 n/a 00:06:16.795 00:06:16.795 Elapsed time = 0.000 seconds 00:06:16.795 07:52:22 -- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:06:16.795 00:06:16.795 00:06:16.795 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.795 http://cunit.sourceforge.net/ 00:06:16.795 00:06:16.795 00:06:16.795 Suite: bit_array 00:06:16.795 Test: test_1bit ...passed 00:06:16.795 Test: test_64bit ...passed 00:06:16.795 Test: test_find ...passed 00:06:16.796 Test: test_resize ...passed 00:06:16.796 Test: test_errors ...passed 00:06:16.796 Test: test_count ...passed 00:06:16.796 Test: test_mask_store_load ...passed 00:06:16.796 Test: test_mask_clear ...passed 00:06:16.796 00:06:16.796 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.796 suites 1 1 n/a 0 0 00:06:16.796 tests 8 8 8 0 0 00:06:16.796 asserts 5075 5075 5075 0 n/a 00:06:16.796 00:06:16.796 Elapsed time = 0.000 seconds 00:06:16.796 07:52:22 -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:06:16.796 00:06:16.796 00:06:16.796 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.796 http://cunit.sourceforge.net/ 00:06:16.796 00:06:16.796 00:06:16.796 Suite: cpuset 00:06:16.796 Test: test_cpuset ...passed 00:06:16.796 Test: test_cpuset_parse ...[2024-07-13 07:52:22.602503] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:06:16.796 [2024-07-13 07:52:22.602713] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:06:16.796 [2024-07-13 07:52:22.602784] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:06:16.796 passed 00:06:16.796 Test: 
test_cpuset_fmt ...passed 00:06:16.796 00:06:16.796 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.796 suites 1 1 n/a 0 0 00:06:16.796 tests 3 3 3 0 0 00:06:16.796 asserts 65 65 65 0 n/a 00:06:16.796 00:06:16.796 Elapsed time = 0.000 seconds 00:06:16.796 [2024-07-13 07:52:22.602862] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:06:16.796 [2024-07-13 07:52:22.602890] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:06:16.796 [2024-07-13 07:52:22.602924] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:06:16.796 [2024-07-13 07:52:22.602950] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:06:16.796 [2024-07-13 07:52:22.602995] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:06:17.055 07:52:22 -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:06:17.055 00:06:17.055 00:06:17.055 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.055 http://cunit.sourceforge.net/ 00:06:17.055 00:06:17.055 00:06:17.055 Suite: crc16 00:06:17.055 Test: test_crc16_t10dif ...passed 00:06:17.055 Test: test_crc16_t10dif_seed ...passed 00:06:17.055 Test: test_crc16_t10dif_copy ...passed 00:06:17.055 00:06:17.055 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.055 suites 1 1 n/a 0 0 00:06:17.055 tests 3 3 3 0 0 00:06:17.055 asserts 5 5 5 0 n/a 00:06:17.055 00:06:17.055 Elapsed time = 0.000 seconds 00:06:17.055 07:52:22 -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:06:17.055 00:06:17.055 00:06:17.055 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.055 http://cunit.sourceforge.net/ 00:06:17.055 00:06:17.055 00:06:17.055 Suite: crc32_ieee 00:06:17.055 Test: test_crc32_ieee ...passed 00:06:17.055 00:06:17.055 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.055 suites 1 1 n/a 0 0 00:06:17.055 tests 1 1 1 0 0 00:06:17.055 asserts 1 1 1 0 n/a 00:06:17.055 00:06:17.055 Elapsed time = 0.000 seconds 00:06:17.055 07:52:22 -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:06:17.055 00:06:17.055 00:06:17.055 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.055 http://cunit.sourceforge.net/ 00:06:17.055 00:06:17.055 00:06:17.055 Suite: crc32c 00:06:17.055 Test: test_crc32c ...passed 00:06:17.055 Test: test_crc32c_nvme ...passed 00:06:17.055 00:06:17.055 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.055 suites 1 1 n/a 0 0 00:06:17.056 tests 2 2 2 0 0 00:06:17.056 asserts 16 16 16 0 n/a 00:06:17.056 00:06:17.056 Elapsed time = 0.000 seconds 00:06:17.056 07:52:22 -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:06:17.056 00:06:17.056 00:06:17.056 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.056 http://cunit.sourceforge.net/ 00:06:17.056 00:06:17.056 00:06:17.056 Suite: crc64 00:06:17.056 Test: test_crc64_nvme ...passed 00:06:17.056 00:06:17.056 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.056 suites 1 1 n/a 0 0 00:06:17.056 tests 1 1 1 0 0 00:06:17.056 asserts 4 4 4 0 n/a 00:06:17.056 00:06:17.056 Elapsed time = 0.000 seconds 
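For reference, the crc32c suite above exercises the CRC-32C (Castagnoli) variant, which differs from crc32_ieee only in its polynomial. A self-contained bitwise sketch (a generic reference implementation, not SPDK's optimized one) that reproduces the standard check value:

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    /* CRC-32C: reflected polynomial 0x82F63B78, initial value and final
     * XOR of 0xFFFFFFFF, processing the input one bit at a time. */
    static uint32_t
    crc32c(const void *buf, size_t len)
    {
        const uint8_t *p = buf;
        uint32_t crc = 0xFFFFFFFFu;

        while (len--) {
            crc ^= *p++;
            for (int i = 0; i < 8; i++) {
                /* If the low bit is set, shift and fold in the polynomial;
                 * otherwise just shift. */
                crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
            }
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int
    main(void)
    {
        /* Well-known CRC-32C check value for the ASCII string "123456789". */
        assert(crc32c("123456789", 9) == 0xE3069283u);
        return 0;
    }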
00:06:17.056 07:52:22 -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:06:17.056 00:06:17.056 00:06:17.056 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.056 http://cunit.sourceforge.net/ 00:06:17.056 00:06:17.056 00:06:17.056 Suite: string 00:06:17.056 Test: test_parse_ip_addr ...passed 00:06:17.056 Test: test_str_chomp ...passed 00:06:17.056 Test: test_parse_capacity ...passed 00:06:17.056 Test: test_sprintf_append_realloc ...passed 00:06:17.056 Test: test_strtol ...passed 00:06:17.056 Test: test_strtoll ...passed 00:06:17.056 Test: test_strarray ...passed 00:06:17.056 Test: test_strcpy_replace ...passed 00:06:17.056 00:06:17.056 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.056 suites 1 1 n/a 0 0 00:06:17.056 tests 8 8 8 0 0 00:06:17.056 asserts 161 161 161 0 n/a 00:06:17.056 00:06:17.056 Elapsed time = 0.000 seconds 00:06:17.056 07:52:22 -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:06:17.056 00:06:17.056 00:06:17.056 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.056 http://cunit.sourceforge.net/ 00:06:17.056 00:06:17.056 00:06:17.056 Suite: dif 00:06:17.056 Test: dif_generate_and_verify_test ...passed 00:06:17.056 Test: dif_disable_check_test ...[2024-07-13 07:52:22.730677] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:06:17.056 [2024-07-13 07:52:22.731033] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:06:17.056 [2024-07-13 07:52:22.731223] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:06:17.056 [2024-07-13 07:52:22.731386] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:06:17.056 [2024-07-13 07:52:22.731534] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:06:17.056 [2024-07-13 07:52:22.731725] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:06:17.056 [2024-07-13 07:52:22.732476] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:06:17.056 [2024-07-13 07:52:22.732822] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:06:17.056 [2024-07-13 07:52:22.733018] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:06:17.056 passed 00:06:17.056 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-07-13 07:52:22.733752] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:06:17.056 [2024-07-13 07:52:22.734139] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:06:17.056 [2024-07-13 07:52:22.734346] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:06:17.056 [2024-07-13 07:52:22.734813] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:06:17.056 [2024-07-13 07:52:22.734965] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:17.056 [2024-07-13 07:52:22.735102] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:17.056 [2024-07-13 07:52:22.735471] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:17.056 [2024-07-13 07:52:22.735597] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:17.056 [2024-07-13 07:52:22.735828] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:06:17.056 [2024-07-13 07:52:22.735968] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:06:17.056 [2024-07-13 07:52:22.736127] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:06:17.056 passed 00:06:17.056 Test: dif_apptag_mask_test ...[2024-07-13 07:52:22.736406] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:06:17.056 [2024-07-13 07:52:22.736727] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:06:17.056 passed 00:06:17.056 Test: dif_sec_512_md_0_error_test ...passed 00:06:17.056 Test: dif_sec_4096_md_0_error_test ...[2024-07-13 07:52:22.736814] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:06:17.056 passed 00:06:17.056 Test: dif_sec_4100_md_128_error_test ...passed 00:06:17.056 Test: dif_guard_seed_test ...passed 00:06:17.056 Test: dif_guard_value_test ...[2024-07-13 07:52:22.736982] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:06:17.056 [2024-07-13 07:52:22.737103] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:06:17.056 [2024-07-13 07:52:22.737147] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:06:17.056 [2024-07-13 07:52:22.737174] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:06:17.056 passed 00:06:17.056 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:06:17.056 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:06:17.056 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:06:17.056 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:17.056 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:17.056 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:06:17.056 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:06:17.056 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:06:17.056 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:06:17.056 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:06:17.056 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:06:17.056 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:06:17.056 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:06:17.056 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:06:17.056 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:06:17.056 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:06:17.056 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:17.056 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:17.056 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-13 07:52:22.771236] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd44, Actual=fd4c 00:06:17.056 [2024-07-13 07:52:22.773050] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fe29, Actual=fe21 00:06:17.056 [2024-07-13 07:52:22.774856] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:06:17.056 [2024-07-13 07:52:22.776639] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:06:17.056 [2024-07-13 07:52:22.778419] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80059 00:06:17.056 [2024-07-13 07:52:22.781010] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80059 00:06:17.056 [2024-07-13 07:52:22.783552] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=e484 00:06:17.056 [2024-07-13 07:52:22.785172] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fe21, Actual=eb7a 00:06:17.056 [2024-07-13 07:52:22.786430] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1abf53ed, Actual=1ab753ed 00:06:17.056 [2024-07-13 07:52:22.788921] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=385f4660, Actual=38574660 00:06:17.056 [2024-07-13 07:52:22.791564] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:06:17.056 [2024-07-13 07:52:22.792869] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:06:17.056 [2024-07-13 07:52:22.794165] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80059 00:06:17.056 [2024-07-13 07:52:22.795453] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80059 00:06:17.056 [2024-07-13 07:52:22.796770] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=6da78133 00:06:17.056 [2024-07-13 07:52:22.797545] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=38574660, Actual=f03acc2d 00:06:17.056 [2024-07-13 07:52:22.798960] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ec420d3, Actual=a576a7728ecc20d3 00:06:17.056 [2024-07-13 07:52:22.800909] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=88010a2d483fa266, Actual=88010a2d4837a266 00:06:17.056 [2024-07-13 07:52:22.802932] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:06:17.056 [2024-07-13 07:52:22.804895] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:06:17.056 [2024-07-13 07:52:22.806807] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800000059 00:06:17.056 [2024-07-13 07:52:22.808747] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800000059 00:06:17.056 passed 00:06:17.056 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-07-13 07:52:22.810672] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=25c8174803de7171 00:06:17.056 [2024-07-13 07:52:22.812100] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=88010a2d4837a266, Actual=1bb94d99052271fc 00:06:17.056 [2024-07-13 07:52:22.812351] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd44, Actual=fd4c 00:06:17.057 [2024-07-13 07:52:22.812571] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe29, Actual=fe21 00:06:17.057 [2024-07-13 07:52:22.812765] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:06:17.057 [2024-07-13 07:52:22.812967] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:06:17.057 [2024-07-13 07:52:22.813203] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80058 00:06:17.057 [2024-07-13 07:52:22.813400] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80058 00:06:17.057 [2024-07-13 07:52:22.813862] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e484 00:06:17.057 [2024-07-13 07:52:22.814098] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=eb7a 00:06:17.057 [2024-07-13 07:52:22.814259] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1abf53ed, Actual=1ab753ed 00:06:17.057 [2024-07-13 07:52:22.814441] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=385f4660, Actual=38574660 00:06:17.057 [2024-07-13 07:52:22.814703] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:06:17.057 passed 00:06:17.057 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-07-13 07:52:22.814925] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:06:17.057 [2024-07-13 07:52:22.815131] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80058 00:06:17.057 [2024-07-13 07:52:22.815327] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80058 00:06:17.057 [2024-07-13 07:52:22.815540] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=6da78133 00:06:17.057 [2024-07-13 07:52:22.815685] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=f03acc2d 00:06:17.057 [2024-07-13 07:52:22.815958] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ec420d3, Actual=a576a7728ecc20d3 00:06:17.057 [2024-07-13 07:52:22.816261] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d483fa266, Actual=88010a2d4837a266 00:06:17.057 [2024-07-13 07:52:22.816605] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:06:17.057 [2024-07-13 07:52:22.816928] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:06:17.057 [2024-07-13 07:52:22.817274] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000058 00:06:17.057 [2024-07-13 07:52:22.817635] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000058 00:06:17.057 [2024-07-13 07:52:22.817987] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=25c8174803de7171 00:06:17.057 [2024-07-13 07:52:22.818269] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, 
Expected=88010a2d4837a266, Actual=1bb94d99052271fc 00:06:17.057 [2024-07-13 07:52:22.818560] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd44, Actual=fd4c 00:06:17.057 [2024-07-13 07:52:22.818894] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe29, Actual=fe21 00:06:17.057 [2024-07-13 07:52:22.819196] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:06:17.057 [2024-07-13 07:52:22.819542] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:06:17.057 [2024-07-13 07:52:22.819859] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80058 00:06:17.057 [2024-07-13 07:52:22.820162] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80058 00:06:17.057 [2024-07-13 07:52:22.820476] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e484 00:06:17.057 [2024-07-13 07:52:22.820725] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=eb7a 00:06:17.057 [2024-07-13 07:52:22.820877] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1abf53ed, Actual=1ab753ed 00:06:17.057 [2024-07-13 07:52:22.821088] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=385f4660, Actual=38574660 00:06:17.057 [2024-07-13 07:52:22.821295] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:06:17.057 [2024-07-13 07:52:22.821525] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:06:17.057 [2024-07-13 07:52:22.821746] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80058 00:06:17.057 [2024-07-13 07:52:22.821968] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80058 00:06:17.057 [2024-07-13 07:52:22.822177] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=6da78133 00:06:17.057 [2024-07-13 07:52:22.822328] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=f03acc2d 00:06:17.057 [2024-07-13 07:52:22.822638] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ec420d3, Actual=a576a7728ecc20d3 00:06:17.057 [2024-07-13 07:52:22.822972] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d483fa266, Actual=88010a2d4837a266 00:06:17.057 [2024-07-13 07:52:22.823319] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:06:17.057 [2024-07-13 07:52:22.823666] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: 
LBA=88, Expected=88, Actual=80 00:06:17.057 [2024-07-13 07:52:22.824004] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000058 00:06:17.057 [2024-07-13 07:52:22.824332] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000058 00:06:17.057 passed 00:06:17.057 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-07-13 07:52:22.824929] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=25c8174803de7171 00:06:17.057 [2024-07-13 07:52:22.825218] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=1bb94d99052271fc 00:06:17.057 [2024-07-13 07:52:22.825507] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd44, Actual=fd4c 00:06:17.057 [2024-07-13 07:52:22.825825] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe29, Actual=fe21 00:06:17.057 [2024-07-13 07:52:22.826138] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:06:17.057 [2024-07-13 07:52:22.826431] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:06:17.057 [2024-07-13 07:52:22.826782] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80058 00:06:17.057 [2024-07-13 07:52:22.827089] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80058 00:06:17.057 [2024-07-13 07:52:22.827396] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e484 00:06:17.057 [2024-07-13 07:52:22.827653] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=eb7a 00:06:17.057 [2024-07-13 07:52:22.827819] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1abf53ed, Actual=1ab753ed 00:06:17.057 [2024-07-13 07:52:22.828037] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=385f4660, Actual=38574660 00:06:17.057 [2024-07-13 07:52:22.828269] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:06:17.057 [2024-07-13 07:52:22.828510] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:06:17.057 [2024-07-13 07:52:22.828721] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80058 00:06:17.057 [2024-07-13 07:52:22.828941] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80058 00:06:17.057 [2024-07-13 07:52:22.829157] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=6da78133 00:06:17.057 [2024-07-13 07:52:22.829315] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=f03acc2d 00:06:17.057 [2024-07-13 07:52:22.829612] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ec420d3, Actual=a576a7728ecc20d3 00:06:17.057 [2024-07-13 07:52:22.829950] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d483fa266, Actual=88010a2d4837a266 00:06:17.057 [2024-07-13 07:52:22.830279] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:06:17.057 [2024-07-13 07:52:22.830635] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:06:17.057 [2024-07-13 07:52:22.830984] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000058 00:06:17.057 [2024-07-13 07:52:22.831333] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000058 00:06:17.057 [2024-07-13 07:52:22.831701] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=25c8174803de7171 00:06:17.057 passed 00:06:17.057 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...passed 00:06:17.057 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-07-13 07:52:22.831991] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=1bb94d99052271fc 00:06:17.057 [2024-07-13 07:52:22.832199] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd44, Actual=fd4c 00:06:17.057 [2024-07-13 07:52:22.832432] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe29, Actual=fe21 00:06:17.057 [2024-07-13 07:52:22.832733] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:06:17.057 [2024-07-13 07:52:22.832975] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:06:17.057 [2024-07-13 07:52:22.833240] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80058 00:06:17.057 [2024-07-13 07:52:22.833551] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80058 00:06:17.057 [2024-07-13 07:52:22.833800] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e484 00:06:17.057 [2024-07-13 07:52:22.834048] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=eb7a 00:06:17.057 [2024-07-13 07:52:22.834261] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1abf53ed, Actual=1ab753ed 00:06:17.057 [2024-07-13 07:52:22.834492] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=385f4660, 
Actual=38574660 00:06:17.057 [2024-07-13 07:52:22.834734] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:06:17.058 [2024-07-13 07:52:22.834947] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:06:17.058 [2024-07-13 07:52:22.835179] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80058 00:06:17.058 [2024-07-13 07:52:22.835389] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80058 00:06:17.058 [2024-07-13 07:52:22.835825] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=6da78133 00:06:17.058 [2024-07-13 07:52:22.835992] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=f03acc2d 00:06:17.058 [2024-07-13 07:52:22.836339] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ec420d3, Actual=a576a7728ecc20d3 00:06:17.058 [2024-07-13 07:52:22.836702] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d483fa266, Actual=88010a2d4837a266 00:06:17.058 [2024-07-13 07:52:22.837032] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:06:17.058 [2024-07-13 07:52:22.837367] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:06:17.058 [2024-07-13 07:52:22.837708] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000058 00:06:17.058 [2024-07-13 07:52:22.838049] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000058 00:06:17.058 passed 00:06:17.058 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-07-13 07:52:22.838411] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=25c8174803de7171 00:06:17.058 [2024-07-13 07:52:22.838708] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=1bb94d99052271fc 00:06:17.058 [2024-07-13 07:52:22.838953] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd44, Actual=fd4c 00:06:17.058 [2024-07-13 07:52:22.839248] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe29, Actual=fe21 00:06:17.058 [2024-07-13 07:52:22.839548] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:06:17.058 [2024-07-13 07:52:22.839841] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:06:17.058 passed 00:06:17.058 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-07-13 07:52:22.840180] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed 
to compare Ref Tag: LBA=88, Expected=58, Actual=80058 00:06:17.058 [2024-07-13 07:52:22.840506] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80058 00:06:17.058 [2024-07-13 07:52:22.840815] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e484 00:06:17.058 [2024-07-13 07:52:22.841052] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=eb7a 00:06:17.058 [2024-07-13 07:52:22.841252] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1abf53ed, Actual=1ab753ed 00:06:17.058 [2024-07-13 07:52:22.841440] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=385f4660, Actual=38574660 00:06:17.058 [2024-07-13 07:52:22.841677] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:06:17.058 [2024-07-13 07:52:22.841875] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:06:17.058 [2024-07-13 07:52:22.842084] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80058 00:06:17.058 [2024-07-13 07:52:22.842279] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80058 00:06:17.058 [2024-07-13 07:52:22.842548] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=6da78133 00:06:17.058 [2024-07-13 07:52:22.842701] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=f03acc2d 00:06:17.058 [2024-07-13 07:52:22.843028] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ec420d3, Actual=a576a7728ecc20d3 00:06:17.058 [2024-07-13 07:52:22.843353] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d483fa266, Actual=88010a2d4837a266 00:06:17.058 [2024-07-13 07:52:22.843705] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:06:17.058 [2024-07-13 07:52:22.844042] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:06:17.058 [2024-07-13 07:52:22.844386] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000058 00:06:17.058 passed 00:06:17.058 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:06:17.058 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...[2024-07-13 07:52:22.844738] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000058 00:06:17.058 [2024-07-13 07:52:22.845110] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=25c8174803de7171 00:06:17.058 [2024-07-13 07:52:22.845403] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to 
compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=1bb94d99052271fc 00:06:17.058 passed 00:06:17.058 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:06:17.058 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:17.317 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:06:17.317 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:06:17.317 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:06:17.317 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:06:17.317 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:17.317 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-13 07:52:22.877043] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd44, Actual=fd4c 00:06:17.317 [2024-07-13 07:52:22.879089] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=412a, Actual=4122 00:06:17.317 [2024-07-13 07:52:22.881064] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:06:17.317 [2024-07-13 07:52:22.883064] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:06:17.318 [2024-07-13 07:52:22.885018] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80059 00:06:17.318 [2024-07-13 07:52:22.886986] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80059 00:06:17.318 [2024-07-13 07:52:22.888044] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=e484 00:06:17.318 [2024-07-13 07:52:22.888969] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=879b 00:06:17.318 [2024-07-13 07:52:22.889682] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1abf53ed, Actual=1ab753ed 00:06:17.318 [2024-07-13 07:52:22.890383] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=91201fdf, Actual=91281fdf 00:06:17.318 [2024-07-13 07:52:22.891107] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:06:17.318 [2024-07-13 07:52:22.891859] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:06:17.318 [2024-07-13 07:52:22.892568] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80059 00:06:17.318 [2024-07-13 07:52:22.893274] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80059 00:06:17.318 [2024-07-13 07:52:22.893982] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=6da78133 00:06:17.318 [2024-07-13 07:52:22.894725] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=53d0da20 00:06:17.318 
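The flood of "Failed to compare Guard / App Tag / Ref Tag" messages above is expected output, not a malfunction: the *_inject_1_2_4_8_* cases flip single bits in each field of the protection-information tuple and then assert that verification catches the corruption, so every *ERROR* line corresponds to a detected injection. As a rough, hedged illustration of what is being compared (SPDK's real implementation is _dif_verify in lib/util/dif.c; the layout, names, and message format below are simplified assumptions, not its actual code), a T10 DIF guard check recomputes a CRC16 with polynomial 0x8BB7 over the block data and matches it against the stored 8-byte tuple:

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative 8-byte T10 DIF tuple kept in each block's metadata. */
    struct dif_tuple {
        uint16_t guard;    /* CRC16 of the block's data */
        uint16_t app_tag;  /* application-defined tag */
        uint32_t ref_tag;  /* typically derived from the LBA */
    };

    /* CRC16 T10-DIF: polynomial 0x8BB7, MSB-first, initial value 0. */
    static uint16_t crc16_t10dif(const uint8_t *buf, size_t len)
    {
        uint16_t crc = 0;

        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)((uint16_t)buf[i] << 8);
            for (int bit = 0; bit < 8; bit++) {
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                     : (uint16_t)(crc << 1);
            }
        }
        return crc;
    }

    /* Recompute the guard and compare, logging in the spirit of the output above. */
    static int dif_verify_guard(const uint8_t *data, size_t len,
                                const struct dif_tuple *dif, uint64_t lba)
    {
        uint16_t computed = crc16_t10dif(data, len);

        if (computed != dif->guard) {
            fprintf(stderr, "Failed to compare Guard: LBA=%llx, Expected=%x, Actual=%x\n",
                    (unsigned long long)lba, computed, dif->guard);
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        uint8_t block[512] = {0};
        struct dif_tuple dif = { .guard = crc16_t10dif(block, sizeof(block)),
                                 .app_tag = 0x88, .ref_tag = 0x58 };

        block[0] ^= 0x08; /* single-bit injection, as the tests above do */
        return dif_verify_guard(block, sizeof(block), &dif, 0x88) == -1 ? 0 : 1;
    }

Compiled standalone, the main() above mirrors the inject tests: it seeds a valid tuple, flips one bit of data, and succeeds only if the guard comparison reports the mismatch.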
[2024-07-13 07:52:22.896053] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ec420d3, Actual=a576a7728ecc20d3 00:06:17.318 [2024-07-13 07:52:22.897573] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=7e3ab4023ac66243, Actual=7e3ab4023ace6243 00:06:17.318 [2024-07-13 07:52:22.898909] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:06:17.318 [2024-07-13 07:52:22.900266] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:06:17.318 [2024-07-13 07:52:22.901595] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800000059 00:06:17.318 [2024-07-13 07:52:22.902944] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800000059 00:06:17.318 [2024-07-13 07:52:22.904274] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=25c8174803de7171 00:06:17.318 passed 00:06:17.318 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-13 07:52:22.905651] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=db90a7df0451e6f5 00:06:17.318 [2024-07-13 07:52:22.905960] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd44, Actual=fd4c 00:06:17.318 [2024-07-13 07:52:22.906222] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=412a, Actual=4122 00:06:17.318 [2024-07-13 07:52:22.906492] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:06:17.318 [2024-07-13 07:52:22.906753] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:06:17.318 [2024-07-13 07:52:22.907031] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80059 00:06:17.318 [2024-07-13 07:52:22.907288] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80059 00:06:17.318 [2024-07-13 07:52:22.907543] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=e484 00:06:17.318 [2024-07-13 07:52:22.907798] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=879b 00:06:17.318 [2024-07-13 07:52:22.907980] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1abf53ed, Actual=1ab753ed 00:06:17.318 [2024-07-13 07:52:22.908164] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=91201fdf, Actual=91281fdf 00:06:17.318 [2024-07-13 07:52:22.908352] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:06:17.318 [2024-07-13 07:52:22.908547] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:06:17.318 [2024-07-13 07:52:22.908722] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80059 00:06:17.318 [2024-07-13 07:52:22.908911] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80059 00:06:17.318 [2024-07-13 07:52:22.909091] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=6da78133 00:06:17.318 [2024-07-13 07:52:22.909273] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=53d0da20 00:06:17.318 [2024-07-13 07:52:22.909684] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ec420d3, Actual=a576a7728ecc20d3 00:06:17.318 [2024-07-13 07:52:22.910070] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=7e3ab4023ac66243, Actual=7e3ab4023ace6243 00:06:17.318 [2024-07-13 07:52:22.910588] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:06:17.318 [2024-07-13 07:52:22.911001] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:06:17.318 [2024-07-13 07:52:22.911401] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800000059 00:06:17.318 [2024-07-13 07:52:22.911812] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800000059 00:06:17.318 [2024-07-13 07:52:22.912223] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=25c8174803de7171 00:06:17.318 passed 00:06:17.318 Test: dix_sec_512_md_0_error ...passed 00:06:17.318 Test: dix_sec_512_md_8_prchk_0_single_iov ...[2024-07-13 07:52:22.912639] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=db90a7df0451e6f5 00:06:17.318 [2024-07-13 07:52:22.912692] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
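dix_sec_512_md_0_error, just above, is a negative test rather than a failure: it calls spdk_dif_ctx_init with a metadata size of 0 and requires exactly the "Metadata size is smaller than DIF size" error, since separate-metadata (DIX) formats must reserve at least the 8 bytes the protection tuple occupies. A minimal hedged sketch of that validation (the real check sits in spdk_dif_ctx_init in lib/util/dif.c; the function and constant names here are illustrative assumptions):

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>

    #define DIF_SIZE 8u /* 16-bit guard + 16-bit app tag + 32-bit ref tag */

    /* Illustrative stand-in for the md_size validation done at context init. */
    static int dif_ctx_check_md_size(uint32_t md_size)
    {
        if (md_size < DIF_SIZE) {
            fprintf(stderr, "Metadata size is smaller than DIF size.\n");
            return -EINVAL;
        }
        return 0;
    }

    int main(void)
    {
        /* md_size = 0, as in dix_sec_512_md_0_error: init must refuse. */
        return dif_ctx_check_md_size(0) == -EINVAL ? 0 : 1;
    }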
00:06:17.318 passed 00:06:17.318 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:06:17.318 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:06:17.318 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:17.318 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:06:17.318 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:06:17.318 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:06:17.318 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:06:17.318 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:17.318 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-13 07:52:22.942449] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd44, Actual=fd4c 00:06:17.318 [2024-07-13 07:52:22.944161] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=412a, Actual=4122 00:06:17.318 [2024-07-13 07:52:22.945830] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:06:17.318 [2024-07-13 07:52:22.947488] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:06:17.318 [2024-07-13 07:52:22.948698] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80059 00:06:17.318 [2024-07-13 07:52:22.950757] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80059 00:06:17.318 [2024-07-13 07:52:22.952727] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=e484 00:06:17.318 [2024-07-13 07:52:22.954695] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=879b 00:06:17.318 [2024-07-13 07:52:22.956209] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1abf53ed, Actual=1ab753ed 00:06:17.318 [2024-07-13 07:52:22.957743] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=91201fdf, Actual=91281fdf 00:06:17.318 [2024-07-13 07:52:22.959303] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:06:17.318 [2024-07-13 07:52:22.960842] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:06:17.318 [2024-07-13 07:52:22.962318] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80059 00:06:17.318 [2024-07-13 07:52:22.963652] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80059 00:06:17.318 [2024-07-13 07:52:22.964350] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=6da78133 00:06:17.318 [2024-07-13 07:52:22.965068] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=53d0da20 00:06:17.318 [2024-07-13 07:52:22.966402] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ec420d3, Actual=a576a7728ecc20d3 00:06:17.318 [2024-07-13 07:52:22.967755] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=7e3ab4023ac66243, Actual=7e3ab4023ace6243 00:06:17.318 [2024-07-13 07:52:22.969078] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:06:17.318 [2024-07-13 07:52:22.970414] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:06:17.318 [2024-07-13 07:52:22.971753] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800000059 00:06:17.318 [2024-07-13 07:52:22.973085] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800000059 00:06:17.318 [2024-07-13 07:52:22.974586] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=25c8174803de7171 00:06:17.318 passed 00:06:17.318 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-13 07:52:22.975954] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=db90a7df0451e6f5 00:06:17.318 [2024-07-13 07:52:22.976259] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd44, Actual=fd4c 00:06:17.318 [2024-07-13 07:52:22.976548] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=412a, Actual=4122 00:06:17.318 [2024-07-13 07:52:22.976804] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:06:17.318 [2024-07-13 07:52:22.977054] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:06:17.318 [2024-07-13 07:52:22.977329] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80059 00:06:17.318 [2024-07-13 07:52:22.977593] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80059 00:06:17.318 [2024-07-13 07:52:22.977858] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=e484 00:06:17.319 [2024-07-13 07:52:22.978107] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=879b 00:06:17.319 [2024-07-13 07:52:22.978289] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1abf53ed, Actual=1ab753ed 00:06:17.319 [2024-07-13 07:52:22.978475] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=91201fdf, Actual=91281fdf 00:06:17.319 [2024-07-13 07:52:22.978670] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:06:17.319 [2024-07-13 07:52:22.978856] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: 
*ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:06:17.319 [2024-07-13 07:52:22.979043] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80059 00:06:17.319 [2024-07-13 07:52:22.979228] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80059 00:06:17.319 [2024-07-13 07:52:22.979401] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=6da78133 00:06:17.319 [2024-07-13 07:52:22.979593] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=53d0da20 00:06:17.319 [2024-07-13 07:52:22.979990] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ec420d3, Actual=a576a7728ecc20d3 00:06:17.319 passed 00:06:17.319 Test: set_md_interleave_iovs_test ...[2024-07-13 07:52:22.980389] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=7e3ab4023ac66243, Actual=7e3ab4023ace6243 00:06:17.319 [2024-07-13 07:52:22.980794] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:06:17.319 [2024-07-13 07:52:22.981186] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=80 00:06:17.319 [2024-07-13 07:52:22.981576] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800000059 00:06:17.319 [2024-07-13 07:52:22.981968] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800000059 00:06:17.319 [2024-07-13 07:52:22.982349] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=25c8174803de7171 00:06:17.319 [2024-07-13 07:52:22.982748] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=db90a7df0451e6f5 00:06:17.319 passed 00:06:17.319 Test: set_md_interleave_iovs_split_test ...passed 00:06:17.319 Test: dif_generate_stream_pi_16_test ...passed 00:06:17.319 Test: dif_generate_stream_test ...passed 00:06:17.319 Test: set_md_interleave_iovs_alignment_test ...passed 00:06:17.319 Test: dif_generate_split_test ...[2024-07-13 07:52:22.988614] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1799:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
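The "Buffer overflow will occur" line above is likewise a provoked error: spdk_dif_set_md_interleave_iovs maps a caller's buffer onto data-plus-metadata strides, and the negative cases in this suite hand it a destination too small to hold the interleaved metadata. What "too small" means here is an assumption on my part; as a hedged sketch, the capacity requirement for interleaving works out roughly as below (names and arithmetic are illustrative, not SPDK's):

    #include <stdint.h>
    #include <stdio.h>

    /* Can buf_len bytes hold data_len bytes of data once md_size bytes of
     * metadata are interleaved after every data_block_size bytes? */
    static int md_interleave_fits(uint64_t data_len, uint32_t data_block_size,
                                  uint32_t md_size, uint64_t buf_len)
    {
        uint64_t num_blocks = (data_len + data_block_size - 1) / data_block_size;
        uint64_t required = data_len + num_blocks * md_size;

        if (buf_len < required) {
            fprintf(stderr, "Buffer overflow will occur.\n");
            return 0;
        }
        return 1;
    }

    int main(void)
    {
        /* 8 x 512-byte blocks with 8-byte DIF each need 4160 bytes, not 4096. */
        return md_interleave_fits(8 * 512, 512, 8, 4096) ? 1 : 0;
    }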
00:06:17.319 passed 00:06:17.319 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:06:17.319 Test: dif_verify_split_test ...passed 00:06:17.319 Test: dif_verify_stream_multi_segments_test ...passed 00:06:17.319 Test: update_crc32c_pi_16_test ...passed 00:06:17.319 Test: update_crc32c_test ...passed 00:06:17.319 Test: dif_update_crc32c_split_test ...passed 00:06:17.319 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:06:17.319 Test: get_range_with_md_test ...passed 00:06:17.319 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:06:17.319 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:06:17.319 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:06:17.319 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:06:17.319 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:06:17.319 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:06:17.319 Test: dif_generate_and_verify_unmap_test ...passed 00:06:17.319 00:06:17.319 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.319 suites 1 1 n/a 0 0 00:06:17.319 tests 79 79 79 0 0 00:06:17.319 asserts 3584 3584 3584 0 n/a 00:06:17.319 00:06:17.319 Elapsed time = 0.290 seconds 00:06:17.319 07:52:23 -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:06:17.319 00:06:17.319 00:06:17.319 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.319 http://cunit.sourceforge.net/ 00:06:17.319 00:06:17.319 00:06:17.319 Suite: iov 00:06:17.319 Test: test_single_iov ...passed 00:06:17.319 Test: test_simple_iov ...passed 00:06:17.319 Test: test_complex_iov ...passed 00:06:17.319 Test: test_iovs_to_buf ...passed 00:06:17.319 Test: test_buf_to_iovs ...passed 00:06:17.319 Test: test_memset ...passed 00:06:17.319 Test: test_iov_one ...passed 00:06:17.319 Test: test_iov_xfer ...passed 00:06:17.319 00:06:17.319 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.319 suites 1 1 n/a 0 0 00:06:17.319 tests 8 8 8 0 0 00:06:17.319 asserts 156 156 156 0 n/a 00:06:17.319 00:06:17.319 Elapsed time = 0.000 seconds 00:06:17.319 07:52:23 -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:06:17.319 00:06:17.319 00:06:17.319 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.319 http://cunit.sourceforge.net/ 00:06:17.319 00:06:17.319 00:06:17.319 Suite: math 00:06:17.319 Test: test_serial_number_arithmetic ...passed 00:06:17.319 Suite: erase 00:06:17.319 Test: test_memset_s ...passed 00:06:17.319 00:06:17.319 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.319 suites 2 2 n/a 0 0 00:06:17.319 tests 2 2 2 0 0 00:06:17.319 asserts 18 18 18 0 n/a 00:06:17.319 00:06:17.319 Elapsed time = 0.000 seconds 00:06:17.319 07:52:23 -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:06:17.319 00:06:17.319 00:06:17.319 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.319 http://cunit.sourceforge.net/ 00:06:17.319 00:06:17.319 00:06:17.319 Suite: pipe 00:06:17.319 Test: test_create_destroy ...passed 00:06:17.319 Test: test_write_get_buffer ...passed 00:06:17.319 Test: test_write_advance ...passed 00:06:17.319 Test: test_read_get_buffer ...passed 00:06:17.319 Test: test_read_advance ...passed 00:06:17.319 Test: test_data ...passed 00:06:17.319 00:06:17.319 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.319 suites 1 1 n/a 0 
0 00:06:17.319 tests 6 6 6 0 0 00:06:17.319 asserts 250 250 250 0 n/a 00:06:17.319 00:06:17.319 Elapsed time = 0.000 seconds 00:06:17.319 07:52:23 -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:06:17.577 00:06:17.577 00:06:17.577 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.577 http://cunit.sourceforge.net/ 00:06:17.577 00:06:17.577 00:06:17.577 Suite: xor 00:06:17.577 Test: test_xor_gen ...passed 00:06:17.577 00:06:17.577 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.577 suites 1 1 n/a 0 0 00:06:17.577 tests 1 1 1 0 0 00:06:17.577 asserts 17 17 17 0 n/a 00:06:17.577 00:06:17.577 Elapsed time = 0.000 seconds 00:06:17.577 ************************************ 00:06:17.577 END TEST unittest_util 00:06:17.577 ************************************ 00:06:17.577 00:06:17.577 real 0m0.599s 00:06:17.577 user 0m0.437s 00:06:17.577 sys 0m0.167s 00:06:17.577 07:52:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.577 07:52:23 -- common/autotest_common.sh@10 -- # set +x 00:06:17.577 07:52:23 -- unit/unittest.sh@282 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:17.577 07:52:23 -- unit/unittest.sh@283 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:06:17.577 07:52:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:17.577 07:52:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.577 07:52:23 -- common/autotest_common.sh@10 -- # set +x 00:06:17.577 ************************************ 00:06:17.577 START TEST unittest_vhost 00:06:17.577 ************************************ 00:06:17.577 07:52:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:06:17.577 00:06:17.577 00:06:17.577 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.577 http://cunit.sourceforge.net/ 00:06:17.577 00:06:17.577 00:06:17.577 Suite: vhost_suite 00:06:17.577 Test: desc_to_iov_test ...passed 00:06:17.577 Test: create_controller_test ...[2024-07-13 07:52:23.214786] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 647:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:06:17.577 passed 00:06:17.577 Test: session_find_by_vid_test ...[2024-07-13 07:52:23.217813] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:06:17.578 [2024-07-13 07:52:23.217912] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:06:17.578 [2024-07-13 07:52:23.218003] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:06:17.578 [2024-07-13 07:52:23.218065] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:06:17.578 [2024-07-13 07:52:23.218099] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:06:17.578 [2024-07-13 07:52:23.218301] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1798:vhost_user_dev_init: *ERROR*: Resulting socket path for controller 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx[2024-07-13 07:52:23.219061] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:06:17.578 passed 00:06:17.578 Test: remove_controller_test ...passed 00:06:17.578 Test: vq_avail_ring_get_test ...passed 00:06:17.578 Test: vq_packed_ring_test ...passed 00:06:17.578 Test: vhost_blk_construct_test ...[2024-07-13 07:52:23.220630] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1883:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:06:17.578 passed 00:06:17.578 00:06:17.578 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.578 suites 1 1 n/a 0 0 00:06:17.578 tests 7 7 7 0 0 00:06:17.578 asserts 145 145 145 0 n/a 00:06:17.578 00:06:17.578 Elapsed time = 0.020 seconds 00:06:17.578 00:06:17.578 real 0m0.047s 00:06:17.578 user 0m0.032s 00:06:17.578 sys 0m0.015s 00:06:17.578 07:52:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.578 07:52:23 -- common/autotest_common.sh@10 -- # set +x 00:06:17.578 ************************************ 00:06:17.578 END TEST unittest_vhost 00:06:17.578 ************************************ 00:06:17.578 07:52:23 -- unit/unittest.sh@285 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:06:17.578 07:52:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:17.578 07:52:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.578 07:52:23 -- common/autotest_common.sh@10 -- # set +x 00:06:17.578 ************************************ 00:06:17.578 START TEST unittest_dma 00:06:17.578 ************************************ 00:06:17.578 07:52:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:06:17.578 00:06:17.578 00:06:17.578 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.578 http://cunit.sourceforge.net/ 00:06:17.578 00:06:17.578 00:06:17.578 Suite: dma_suite 00:06:17.578 Test: test_dma ...passed 00:06:17.578 00:06:17.578 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.578 suites 1 1 n/a 0 0 00:06:17.578 tests 1 1 1 0 0 00:06:17.578 asserts 50 50 50 0 n/a 00:06:17.578 00:06:17.578 Elapsed time = 0.000 seconds 00:06:17.578 [2024-07-13 07:52:23.316053] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 37:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:06:17.578 ************************************ 00:06:17.578 END TEST unittest_dma 00:06:17.578 ************************************ 00:06:17.578 00:06:17.578 real 
0m0.033s 00:06:17.578 user 0m0.015s 00:06:17.578 sys 0m0.018s 00:06:17.578 07:52:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.578 07:52:23 -- common/autotest_common.sh@10 -- # set +x 00:06:17.578 07:52:23 -- unit/unittest.sh@287 -- # run_test unittest_init unittest_init 00:06:17.578 07:52:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:17.578 07:52:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.578 07:52:23 -- common/autotest_common.sh@10 -- # set +x 00:06:17.578 ************************************ 00:06:17.578 START TEST unittest_init 00:06:17.578 ************************************ 00:06:17.578 07:52:23 -- common/autotest_common.sh@1104 -- # unittest_init 00:06:17.578 07:52:23 -- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:06:17.835 00:06:17.835 00:06:17.835 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.835 http://cunit.sourceforge.net/ 00:06:17.835 00:06:17.835 00:06:17.835 Suite: subsystem_suite 00:06:17.835 Test: subsystem_sort_test_depends_on_single ...passed 00:06:17.835 Test: subsystem_sort_test_depends_on_multiple ...passed 00:06:17.835 Test: subsystem_sort_test_missing_dependency ...[2024-07-13 07:52:23.403059] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 190:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:06:17.835 passed 00:06:17.835 00:06:17.835 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.835 suites 1 1 n/a 0 0 00:06:17.835 tests 3 3 3 0 0 00:06:17.835 asserts 20 20 20 0 n/a 00:06:17.835 00:06:17.835 Elapsed time = 0.000 seconds 00:06:17.835 [2024-07-13 07:52:23.403292] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 185:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:06:17.835 00:06:17.835 real 0m0.031s 00:06:17.835 user 0m0.016s 00:06:17.835 sys 0m0.016s 00:06:17.835 07:52:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.835 ************************************ 00:06:17.835 END TEST unittest_init 00:06:17.835 ************************************ 00:06:17.835 07:52:23 -- common/autotest_common.sh@10 -- # set +x 00:06:17.835 07:52:23 -- unit/unittest.sh@289 -- # '[' yes = yes ']' 00:06:17.835 07:52:23 -- unit/unittest.sh@289 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:06:17.835 07:52:23 -- unit/unittest.sh@290 -- # hostname 00:06:17.835 07:52:23 -- unit/unittest.sh@290 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t centos7-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:06:17.835 geninfo: WARNING: invalid characters removed from testname! 
00:06:49.900 07:52:50 -- unit/unittest.sh@291 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:06:49.900 07:52:54 -- unit/unittest.sh@292 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:51.290 07:52:56 -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:53.230 07:52:58 -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:55.764 07:53:01 -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:57.678 07:53:03 -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:59.581 07:53:05 -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:02.115 07:53:07 -- unit/unittest.sh@298 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:07:02.115 07:53:07 -- unit/unittest.sh@299 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:07:02.374 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:02.374 Found 308 entries. 
00:07:02.374 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:07:02.374 Writing .css and .png files. 00:07:02.374 Generating output. 00:07:02.374 Processing file include/linux/virtio_ring.h 00:07:02.633 Processing file include/spdk/util.h 00:07:02.633 Processing file include/spdk/endian.h 00:07:02.633 Processing file include/spdk/thread.h 00:07:02.633 Processing file include/spdk/nvme.h 00:07:02.633 Processing file include/spdk/histogram_data.h 00:07:02.633 Processing file include/spdk/nvme_spec.h 00:07:02.633 Processing file include/spdk/bdev_module.h 00:07:02.633 Processing file include/spdk/trace.h 00:07:02.633 Processing file include/spdk/mmio.h 00:07:02.633 Processing file include/spdk/nvmf_transport.h 00:07:02.633 Processing file include/spdk/base64.h 00:07:02.633 Processing file include/spdk_internal/rdma.h 00:07:02.633 Processing file include/spdk_internal/nvme_tcp.h 00:07:02.633 Processing file include/spdk_internal/sock.h 00:07:02.633 Processing file include/spdk_internal/utf.h 00:07:02.633 Processing file include/spdk_internal/sgl.h 00:07:02.633 Processing file include/spdk_internal/virtio.h 00:07:02.891 Processing file lib/accel/accel_sw.c 00:07:02.891 Processing file lib/accel/accel.c 00:07:02.891 Processing file lib/accel/accel_rpc.c 00:07:03.149 Processing file lib/bdev/bdev.c 00:07:03.149 Processing file lib/bdev/bdev_zone.c 00:07:03.149 Processing file lib/bdev/part.c 00:07:03.149 Processing file lib/bdev/bdev_rpc.c 00:07:03.149 Processing file lib/bdev/scsi_nvme.c 00:07:03.408 Processing file lib/blob/blob_bs_dev.c 00:07:03.408 Processing file lib/blob/blobstore.h 00:07:03.408 Processing file lib/blob/request.c 00:07:03.408 Processing file lib/blob/blobstore.c 00:07:03.408 Processing file lib/blob/zeroes.c 00:07:03.408 Processing file lib/blobfs/blobfs.c 00:07:03.408 Processing file lib/blobfs/tree.c 00:07:03.408 Processing file lib/conf/conf.c 00:07:03.408 Processing file lib/dma/dma.c 00:07:03.668 Processing file lib/env_dpdk/pci_virtio.c 00:07:03.668 Processing file lib/env_dpdk/pci_event.c 00:07:03.668 Processing file lib/env_dpdk/pci_vmd.c 00:07:03.668 Processing file lib/env_dpdk/pci_dpdk.c 00:07:03.668 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:07:03.668 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:07:03.668 Processing file lib/env_dpdk/pci_ioat.c 00:07:03.668 Processing file lib/env_dpdk/sigbus_handler.c 00:07:03.668 Processing file lib/env_dpdk/threads.c 00:07:03.668 Processing file lib/env_dpdk/pci_idxd.c 00:07:03.668 Processing file lib/env_dpdk/memory.c 00:07:03.668 Processing file lib/env_dpdk/pci.c 00:07:03.668 Processing file lib/env_dpdk/init.c 00:07:03.668 Processing file lib/env_dpdk/env.c 00:07:03.668 Processing file lib/event/app_rpc.c 00:07:03.668 Processing file lib/event/reactor.c 00:07:03.668 Processing file lib/event/app.c 00:07:03.668 Processing file lib/event/scheduler_static.c 00:07:03.668 Processing file lib/event/log_rpc.c 00:07:04.271 Processing file lib/ftl/ftl_debug.h 00:07:04.271 Processing file lib/ftl/ftl_debug.c 00:07:04.271 Processing file lib/ftl/ftl_core.c 00:07:04.271 Processing file lib/ftl/ftl_io.c 00:07:04.271 Processing file lib/ftl/ftl_core.h 00:07:04.271 Processing file lib/ftl/ftl_io.h 00:07:04.271 Processing file lib/ftl/ftl_band.h 00:07:04.271 Processing file lib/ftl/ftl_writer.c 00:07:04.271 Processing file lib/ftl/ftl_band.c 00:07:04.271 Processing file lib/ftl/ftl_trace.c 00:07:04.271 Processing file lib/ftl/ftl_writer.h 00:07:04.271 Processing file lib/ftl/ftl_sb.c 00:07:04.271 Processing file 
lib/ftl/ftl_p2l.c 00:07:04.271 Processing file lib/ftl/ftl_rq.c 00:07:04.271 Processing file lib/ftl/ftl_band_ops.c 00:07:04.271 Processing file lib/ftl/ftl_init.c 00:07:04.271 Processing file lib/ftl/ftl_nv_cache_io.h 00:07:04.271 Processing file lib/ftl/ftl_nv_cache.c 00:07:04.271 Processing file lib/ftl/ftl_nv_cache.h 00:07:04.271 Processing file lib/ftl/ftl_l2p_flat.c 00:07:04.271 Processing file lib/ftl/ftl_l2p.c 00:07:04.271 Processing file lib/ftl/ftl_reloc.c 00:07:04.271 Processing file lib/ftl/ftl_l2p_cache.c 00:07:04.271 Processing file lib/ftl/ftl_layout.c 00:07:04.271 Processing file lib/ftl/base/ftl_base_bdev.c 00:07:04.271 Processing file lib/ftl/base/ftl_base_dev.c 00:07:04.271 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:07:04.271 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:07:04.271 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:07:04.271 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:07:04.271 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:07:04.271 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:07:04.271 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:07:04.271 Processing file lib/ftl/mngt/ftl_mngt.c 00:07:04.271 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:07:04.271 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:07:04.271 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:07:04.271 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:07:04.271 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:07:04.530 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:07:04.530 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:07:04.530 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:07:04.530 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:07:04.530 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:07:04.530 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:07:04.530 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:07:04.530 Processing file lib/ftl/utils/ftl_property.h 00:07:04.530 Processing file lib/ftl/utils/ftl_bitmap.c 00:07:04.530 Processing file lib/ftl/utils/ftl_conf.c 00:07:04.530 Processing file lib/ftl/utils/ftl_df.h 00:07:04.530 Processing file lib/ftl/utils/ftl_md.c 00:07:04.530 Processing file lib/ftl/utils/ftl_addr_utils.h 00:07:04.530 Processing file lib/ftl/utils/ftl_mempool.c 00:07:04.530 Processing file lib/ftl/utils/ftl_property.c 00:07:04.788 Processing file lib/idxd/idxd.c 00:07:04.788 Processing file lib/idxd/idxd_user.c 00:07:04.788 Processing file lib/idxd/idxd_internal.h 00:07:04.788 Processing file lib/init/subsystem_rpc.c 00:07:04.788 Processing file lib/init/rpc.c 00:07:04.788 Processing file lib/init/json_config.c 00:07:04.788 Processing file lib/init/subsystem.c 00:07:04.788 Processing file lib/ioat/ioat_internal.h 00:07:04.788 Processing file lib/ioat/ioat.c 00:07:05.355 Processing file lib/iscsi/init_grp.c 00:07:05.355 Processing file lib/iscsi/task.h 00:07:05.355 Processing file lib/iscsi/iscsi_subsystem.c 00:07:05.355 Processing file lib/iscsi/conn.c 00:07:05.355 Processing file lib/iscsi/tgt_node.c 00:07:05.355 Processing file lib/iscsi/iscsi_rpc.c 00:07:05.355 Processing file lib/iscsi/portal_grp.c 00:07:05.355 Processing file lib/iscsi/iscsi.h 00:07:05.355 Processing file lib/iscsi/param.c 00:07:05.355 Processing file lib/iscsi/iscsi.c 00:07:05.355 Processing file lib/iscsi/md5.c 00:07:05.355 Processing file lib/iscsi/task.c 00:07:05.355 Processing file lib/json/json_parse.c 00:07:05.355 Processing file lib/json/json_util.c 00:07:05.355 Processing file lib/json/json_write.c 00:07:05.355 Processing file 
lib/jsonrpc/jsonrpc_server.c 00:07:05.355 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:07:05.355 Processing file lib/jsonrpc/jsonrpc_client.c 00:07:05.355 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:07:05.614 Processing file lib/log/log_flags.c 00:07:05.614 Processing file lib/log/log_deprecated.c 00:07:05.614 Processing file lib/log/log.c 00:07:05.614 Processing file lib/lvol/lvol.c 00:07:05.614 Processing file lib/nbd/nbd.c 00:07:05.614 Processing file lib/nbd/nbd_rpc.c 00:07:05.614 Processing file lib/notify/notify_rpc.c 00:07:05.614 Processing file lib/notify/notify.c 00:07:06.550 Processing file lib/nvme/nvme_cuse.c 00:07:06.550 Processing file lib/nvme/nvme_ctrlr.c 00:07:06.550 Processing file lib/nvme/nvme_poll_group.c 00:07:06.550 Processing file lib/nvme/nvme_ns_cmd.c 00:07:06.550 Processing file lib/nvme/nvme_tcp.c 00:07:06.550 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:07:06.550 Processing file lib/nvme/nvme_discovery.c 00:07:06.550 Processing file lib/nvme/nvme_vfio_user.c 00:07:06.550 Processing file lib/nvme/nvme_fabric.c 00:07:06.550 Processing file lib/nvme/nvme_opal.c 00:07:06.550 Processing file lib/nvme/nvme_transport.c 00:07:06.550 Processing file lib/nvme/nvme_ns.c 00:07:06.550 Processing file lib/nvme/nvme_pcie_common.c 00:07:06.550 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:07:06.550 Processing file lib/nvme/nvme_io_msg.c 00:07:06.550 Processing file lib/nvme/nvme_pcie_internal.h 00:07:06.550 Processing file lib/nvme/nvme.c 00:07:06.550 Processing file lib/nvme/nvme_pcie.c 00:07:06.550 Processing file lib/nvme/nvme_internal.h 00:07:06.550 Processing file lib/nvme/nvme_zns.c 00:07:06.550 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:07:06.550 Processing file lib/nvme/nvme_rdma.c 00:07:06.550 Processing file lib/nvme/nvme_qpair.c 00:07:06.550 Processing file lib/nvme/nvme_quirks.c 00:07:06.809 Processing file lib/nvmf/nvmf.c 00:07:06.809 Processing file lib/nvmf/nvmf_internal.h 00:07:06.809 Processing file lib/nvmf/nvmf_rpc.c 00:07:06.809 Processing file lib/nvmf/ctrlr.c 00:07:06.809 Processing file lib/nvmf/subsystem.c 00:07:06.809 Processing file lib/nvmf/tcp.c 00:07:06.809 Processing file lib/nvmf/transport.c 00:07:06.809 Processing file lib/nvmf/ctrlr_bdev.c 00:07:06.809 Processing file lib/nvmf/rdma.c 00:07:06.809 Processing file lib/nvmf/ctrlr_discovery.c 00:07:06.809 Processing file lib/rdma/common.c 00:07:06.809 Processing file lib/rdma/rdma_verbs.c 00:07:07.068 Processing file lib/rpc/rpc.c 00:07:07.068 Processing file lib/scsi/port.c 00:07:07.068 Processing file lib/scsi/scsi_bdev.c 00:07:07.068 Processing file lib/scsi/lun.c 00:07:07.068 Processing file lib/scsi/scsi_pr.c 00:07:07.068 Processing file lib/scsi/task.c 00:07:07.068 Processing file lib/scsi/dev.c 00:07:07.068 Processing file lib/scsi/scsi.c 00:07:07.068 Processing file lib/scsi/scsi_rpc.c 00:07:07.326 Processing file lib/sock/sock_rpc.c 00:07:07.326 Processing file lib/sock/sock.c 00:07:07.326 Processing file lib/thread/thread.c 00:07:07.326 Processing file lib/thread/iobuf.c 00:07:07.326 Processing file lib/trace/trace_rpc.c 00:07:07.326 Processing file lib/trace/trace_flags.c 00:07:07.326 Processing file lib/trace/trace.c 00:07:07.584 Processing file lib/trace_parser/trace.cpp 00:07:07.584 Processing file lib/ut/ut.c 00:07:07.584 Processing file lib/ut_mock/mock.c 00:07:07.843 Processing file lib/util/string.c 00:07:07.843 Processing file lib/util/strerror_tls.c 00:07:07.843 Processing file lib/util/hexlify.c 00:07:07.843 Processing file lib/util/uuid.c 00:07:07.843 
Processing file lib/util/fd_group.c 00:07:07.843 Processing file lib/util/crc16.c 00:07:07.843 Processing file lib/util/xor.c 00:07:07.843 Processing file lib/util/math.c 00:07:07.843 Processing file lib/util/dif.c 00:07:07.843 Processing file lib/util/bit_array.c 00:07:07.843 Processing file lib/util/fd.c 00:07:07.843 Processing file lib/util/iov.c 00:07:07.843 Processing file lib/util/crc64.c 00:07:07.843 Processing file lib/util/cpuset.c 00:07:07.843 Processing file lib/util/zipf.c 00:07:07.843 Processing file lib/util/crc32.c 00:07:07.843 Processing file lib/util/crc32c.c 00:07:07.843 Processing file lib/util/crc32_ieee.c 00:07:07.843 Processing file lib/util/file.c 00:07:07.843 Processing file lib/util/pipe.c 00:07:07.843 Processing file lib/util/base64.c 00:07:07.843 Processing file lib/vfio_user/host/vfio_user_pci.c 00:07:07.843 Processing file lib/vfio_user/host/vfio_user.c 00:07:08.100 Processing file lib/vhost/rte_vhost_user.c 00:07:08.100 Processing file lib/vhost/vhost_rpc.c 00:07:08.100 Processing file lib/vhost/vhost_blk.c 00:07:08.100 Processing file lib/vhost/vhost_scsi.c 00:07:08.100 Processing file lib/vhost/vhost.c 00:07:08.100 Processing file lib/vhost/vhost_internal.h 00:07:08.100 Processing file lib/virtio/virtio_vfio_user.c 00:07:08.100 Processing file lib/virtio/virtio.c 00:07:08.100 Processing file lib/virtio/virtio_pci.c 00:07:08.100 Processing file lib/virtio/virtio_vhost_user.c 00:07:08.100 Processing file lib/vmd/vmd.c 00:07:08.100 Processing file lib/vmd/led.c 00:07:08.100 Processing file module/accel/dsa/accel_dsa.c 00:07:08.100 Processing file module/accel/dsa/accel_dsa_rpc.c 00:07:08.358 Processing file module/accel/error/accel_error_rpc.c 00:07:08.358 Processing file module/accel/error/accel_error.c 00:07:08.358 Processing file module/accel/iaa/accel_iaa.c 00:07:08.358 Processing file module/accel/iaa/accel_iaa_rpc.c 00:07:08.358 Processing file module/accel/ioat/accel_ioat.c 00:07:08.358 Processing file module/accel/ioat/accel_ioat_rpc.c 00:07:08.358 Processing file module/bdev/aio/bdev_aio.c 00:07:08.358 Processing file module/bdev/aio/bdev_aio_rpc.c 00:07:08.616 Processing file module/bdev/daos/bdev_daos_rpc.c 00:07:08.616 Processing file module/bdev/daos/bdev_daos.c 00:07:08.616 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:07:08.616 Processing file module/bdev/delay/vbdev_delay.c 00:07:08.616 Processing file module/bdev/error/vbdev_error_rpc.c 00:07:08.616 Processing file module/bdev/error/vbdev_error.c 00:07:08.616 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:07:08.616 Processing file module/bdev/ftl/bdev_ftl.c 00:07:08.874 Processing file module/bdev/gpt/vbdev_gpt.c 00:07:08.874 Processing file module/bdev/gpt/gpt.c 00:07:08.874 Processing file module/bdev/gpt/gpt.h 00:07:08.874 Processing file module/bdev/lvol/vbdev_lvol.c 00:07:08.874 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:07:08.874 Processing file module/bdev/malloc/bdev_malloc.c 00:07:08.874 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:07:08.874 Processing file module/bdev/null/bdev_null_rpc.c 00:07:08.874 Processing file module/bdev/null/bdev_null.c 00:07:09.442 Processing file module/bdev/nvme/bdev_mdns_client.c 00:07:09.442 Processing file module/bdev/nvme/bdev_nvme.c 00:07:09.442 Processing file module/bdev/nvme/vbdev_opal.c 00:07:09.442 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:07:09.442 Processing file module/bdev/nvme/nvme_rpc.c 00:07:09.442 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:07:09.442 Processing file 
module/bdev/nvme/bdev_nvme_rpc.c 00:07:09.442 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:07:09.442 Processing file module/bdev/passthru/vbdev_passthru.c 00:07:09.442 Processing file module/bdev/raid/raid0.c 00:07:09.442 Processing file module/bdev/raid/bdev_raid_rpc.c 00:07:09.442 Processing file module/bdev/raid/bdev_raid.h 00:07:09.442 Processing file module/bdev/raid/concat.c 00:07:09.442 Processing file module/bdev/raid/raid1.c 00:07:09.442 Processing file module/bdev/raid/bdev_raid_sb.c 00:07:09.442 Processing file module/bdev/raid/bdev_raid.c 00:07:09.700 Processing file module/bdev/split/vbdev_split.c 00:07:09.700 Processing file module/bdev/split/vbdev_split_rpc.c 00:07:09.700 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:07:09.700 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:07:09.700 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:07:09.700 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:07:09.700 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:07:09.958 Processing file module/blob/bdev/blob_bdev.c 00:07:09.958 Processing file module/blobfs/bdev/blobfs_bdev.c 00:07:09.958 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:07:09.958 Processing file module/env_dpdk/env_dpdk_rpc.c 00:07:09.958 Processing file module/event/subsystems/accel/accel.c 00:07:09.958 Processing file module/event/subsystems/bdev/bdev.c 00:07:09.958 Processing file module/event/subsystems/iobuf/iobuf.c 00:07:09.958 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:07:10.217 Processing file module/event/subsystems/iscsi/iscsi.c 00:07:10.217 Processing file module/event/subsystems/nbd/nbd.c 00:07:10.217 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:07:10.217 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:07:10.217 Processing file module/event/subsystems/scheduler/scheduler.c 00:07:10.217 Processing file module/event/subsystems/scsi/scsi.c 00:07:10.217 Processing file module/event/subsystems/sock/sock.c 00:07:10.475 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:07:10.475 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:07:10.475 Processing file module/event/subsystems/vmd/vmd.c 00:07:10.475 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:07:10.475 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:07:10.475 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:07:10.475 Processing file module/scheduler/gscheduler/gscheduler.c 00:07:10.733 Processing file module/sock/sock_kernel.h 00:07:10.733 Processing file module/sock/posix/posix.c 00:07:10.733 Writing directory view page. 
00:07:10.733 Overall coverage rate: 00:07:10.733 lines......: 38.7% (38558 of 99573 lines) 00:07:10.733 functions..: 42.4% (3531 of 8324 functions) 00:07:10.733 00:07:10.733 00:07:10.733 ===================== 00:07:10.733 All unit tests passed 00:07:10.733 ===================== 00:07:10.733 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:07:10.733 07:53:16 -- unit/unittest.sh@302 -- # set +x 00:07:10.733 00:07:10.733 00:07:10.733 ************************************ 00:07:10.733 END TEST unittest 00:07:10.733 ************************************ 00:07:10.733 00:07:10.733 real 2m14.384s 00:07:10.733 user 1m54.020s 00:07:10.733 sys 0m12.312s 00:07:10.733 07:53:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.733 07:53:16 -- common/autotest_common.sh@10 -- # set +x 00:07:10.992 07:53:16 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:07:10.992 07:53:16 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:07:10.992 07:53:16 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:07:10.992 07:53:16 -- spdk/autotest.sh@173 -- # timing_enter lib 00:07:10.992 07:53:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:10.992 07:53:16 -- common/autotest_common.sh@10 -- # set +x 00:07:10.992 07:53:16 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:10.992 07:53:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:10.992 07:53:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:10.992 07:53:16 -- common/autotest_common.sh@10 -- # set +x 00:07:10.992 ************************************ 00:07:10.992 START TEST env 00:07:10.992 ************************************ 00:07:10.992 07:53:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:10.992 * Looking for test storage... 
00:07:10.992 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:10.992 07:53:16 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:10.992 07:53:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:10.992 07:53:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:10.992 07:53:16 -- common/autotest_common.sh@10 -- # set +x 00:07:10.992 ************************************ 00:07:10.992 START TEST env_memory 00:07:10.992 ************************************ 00:07:10.992 07:53:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:10.992 00:07:10.992 00:07:10.992 CUnit - A unit testing framework for C - Version 2.1-3 00:07:10.992 http://cunit.sourceforge.net/ 00:07:10.992 00:07:10.992 00:07:10.992 Suite: memory 00:07:10.992 Test: alloc and free memory map ...[2024-07-13 07:53:16.706860] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:10.992 passed 00:07:10.992 Test: mem map translation ...[2024-07-13 07:53:16.741142] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:10.992 [2024-07-13 07:53:16.741300] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:10.992 [2024-07-13 07:53:16.741378] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:10.992 [2024-07-13 07:53:16.741466] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:10.992 passed 00:07:10.992 Test: mem map registration ...[2024-07-13 07:53:16.785386] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:07:10.992 [2024-07-13 07:53:16.785534] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:07:11.251 passed 00:07:11.251 Test: mem map adjacent registrations ...passed 00:07:11.251 00:07:11.251 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.251 suites 1 1 n/a 0 0 00:07:11.251 tests 4 4 4 0 0 00:07:11.251 asserts 152 152 152 0 n/a 00:07:11.251 00:07:11.251 Elapsed time = 0.160 seconds 00:07:11.251 00:07:11.251 real 0m0.197s 00:07:11.251 user 0m0.178s 00:07:11.251 sys 0m0.020s 00:07:11.251 07:53:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.251 07:53:16 -- common/autotest_common.sh@10 -- # set +x 00:07:11.251 ************************************ 00:07:11.251 END TEST env_memory 00:07:11.251 ************************************ 00:07:11.251 07:53:16 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:11.251 07:53:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:11.251 07:53:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:11.251 07:53:16 -- common/autotest_common.sh@10 -- # set +x 00:07:11.251 ************************************ 00:07:11.251 START TEST env_vtophys 00:07:11.251 ************************************ 00:07:11.251 07:53:16 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:11.251 EAL: lib.eal log level changed from notice to debug 00:07:11.251 EAL: Detected lcore 0 as core 0 on socket 0 00:07:11.251 EAL: Detected lcore 1 as core 0 on socket 0 00:07:11.251 EAL: Detected lcore 2 as core 0 on socket 0 00:07:11.251 EAL: Detected lcore 3 as core 0 on socket 0 00:07:11.251 EAL: Detected lcore 4 as core 0 on socket 0 00:07:11.251 EAL: Detected lcore 5 as core 0 on socket 0 00:07:11.251 EAL: Detected lcore 6 as core 0 on socket 0 00:07:11.251 EAL: Detected lcore 7 as core 0 on socket 0 00:07:11.251 EAL: Detected lcore 8 as core 0 on socket 0 00:07:11.251 EAL: Detected lcore 9 as core 0 on socket 0 00:07:11.251 EAL: Maximum logical cores by configuration: 128 00:07:11.251 EAL: Detected CPU lcores: 10 00:07:11.251 EAL: Detected NUMA nodes: 1 00:07:11.251 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:07:11.251 EAL: Checking presence of .so 'librte_eal.so.23' 00:07:11.251 EAL: Checking presence of .so 'librte_eal.so' 00:07:11.251 EAL: Detected static linkage of DPDK 00:07:11.509 EAL: No shared files mode enabled, IPC will be disabled 00:07:11.509 EAL: Selected IOVA mode 'PA' 00:07:11.509 EAL: Probing VFIO support... 00:07:11.509 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:11.509 EAL: VFIO modules not loaded, skipping VFIO support... 00:07:11.509 EAL: Ask a virtual area of 0x2e000 bytes 00:07:11.509 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:11.509 EAL: Setting up physically contiguous memory... 00:07:11.509 EAL: Setting maximum number of open files to 4096 00:07:11.509 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:11.509 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:11.509 EAL: Ask a virtual area of 0x61000 bytes 00:07:11.509 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:11.509 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:11.509 EAL: Ask a virtual area of 0x400000000 bytes 00:07:11.509 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:11.509 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:11.509 EAL: Ask a virtual area of 0x61000 bytes 00:07:11.509 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:11.509 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:11.509 EAL: Ask a virtual area of 0x400000000 bytes 00:07:11.509 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:11.509 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:11.509 EAL: Ask a virtual area of 0x61000 bytes 00:07:11.509 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:11.509 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:11.509 EAL: Ask a virtual area of 0x400000000 bytes 00:07:11.509 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:11.509 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:11.509 EAL: Ask a virtual area of 0x61000 bytes 00:07:11.509 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:11.509 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:11.509 EAL: Ask a virtual area of 0x400000000 bytes 00:07:11.509 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:11.509 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:11.509 EAL: Hugepages will be freed exactly as allocated. 
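The vtophys binary exercised here walks the virtual-to-physical translation map that EAL populated with the memseg lists above. As a rough caller-side illustration (not taken from the test source): the app name and buffer sizes below are invented, and the two-argument spdk_vtophys() signature from SPDK's public spdk/env.h is assumed for this SPDK version.

/*
 * Illustrative sketch only -- not part of the vtophys test binary above.
 * Assumptions: public env API from spdk/env.h, two-argument
 * spdk_vtophys(); app name and sizes are invented.
 */
#include <inttypes.h>
#include <stdio.h>
#include "spdk/env.h"

int
main(void)
{
	struct spdk_env_opts opts;
	void *buf;
	uint64_t paddr;

	spdk_env_opts_init(&opts);
	opts.name = "vtophys_sketch"; /* hypothetical app name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* DMA-safe allocation from the hugepage-backed heap EAL set up. */
	buf = spdk_dma_zmalloc(4096, 0x1000, NULL);
	if (buf == NULL) {
		return 1;
	}

	/* Translate a virtual address to a physical address (or IOVA). */
	paddr = spdk_vtophys(buf, NULL);
	if (paddr == SPDK_VTOPHYS_ERROR) {
		fprintf(stderr, "no translation for %p\n", buf);
	} else {
		printf("vaddr %p -> paddr 0x%" PRIx64 "\n", buf, paddr);
	}

	spdk_dma_free(buf);
	return 0;
}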
00:07:11.509 EAL: No shared files mode enabled, IPC is disabled 00:07:11.509 EAL: No shared files mode enabled, IPC is disabled 00:07:11.509 EAL: TSC frequency is ~2100000 KHz 00:07:11.509 EAL: Main lcore 0 is ready (tid=7f82ae81d180;cpuset=[0]) 00:07:11.509 EAL: Trying to obtain current memory policy. 00:07:11.509 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:11.509 EAL: Restoring previous memory policy: 0 00:07:11.509 EAL: request: mp_malloc_sync 00:07:11.509 EAL: No shared files mode enabled, IPC is disabled 00:07:11.509 EAL: Heap on socket 0 was expanded by 2MB 00:07:11.509 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:11.509 EAL: Mem event callback 'spdk:(nil)' registered 00:07:11.509 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:07:11.509 00:07:11.509 00:07:11.509 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.509 http://cunit.sourceforge.net/ 00:07:11.509 00:07:11.509 00:07:11.509 Suite: components_suite 00:07:12.076 Test: vtophys_malloc_test ...passed 00:07:12.076 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:12.076 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:12.076 EAL: Restoring previous memory policy: 0 00:07:12.076 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.076 EAL: request: mp_malloc_sync 00:07:12.076 EAL: No shared files mode enabled, IPC is disabled 00:07:12.076 EAL: Heap on socket 0 was expanded by 4MB 00:07:12.076 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.076 EAL: request: mp_malloc_sync 00:07:12.076 EAL: No shared files mode enabled, IPC is disabled 00:07:12.076 EAL: Heap on socket 0 was shrunk by 4MB 00:07:12.076 EAL: Trying to obtain current memory policy. 00:07:12.076 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:12.076 EAL: Restoring previous memory policy: 0 00:07:12.076 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.076 EAL: request: mp_malloc_sync 00:07:12.076 EAL: No shared files mode enabled, IPC is disabled 00:07:12.076 EAL: Heap on socket 0 was expanded by 6MB 00:07:12.076 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.076 EAL: request: mp_malloc_sync 00:07:12.076 EAL: No shared files mode enabled, IPC is disabled 00:07:12.076 EAL: Heap on socket 0 was shrunk by 6MB 00:07:12.076 EAL: Trying to obtain current memory policy. 00:07:12.076 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:12.076 EAL: Restoring previous memory policy: 0 00:07:12.076 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.076 EAL: request: mp_malloc_sync 00:07:12.076 EAL: No shared files mode enabled, IPC is disabled 00:07:12.076 EAL: Heap on socket 0 was expanded by 10MB 00:07:12.076 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.076 EAL: request: mp_malloc_sync 00:07:12.076 EAL: No shared files mode enabled, IPC is disabled 00:07:12.076 EAL: Heap on socket 0 was shrunk by 10MB 00:07:12.076 EAL: Trying to obtain current memory policy. 
00:07:12.076 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:12.076 EAL: Restoring previous memory policy: 0 00:07:12.076 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.076 EAL: request: mp_malloc_sync 00:07:12.076 EAL: No shared files mode enabled, IPC is disabled 00:07:12.076 EAL: Heap on socket 0 was expanded by 18MB 00:07:12.076 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.076 EAL: request: mp_malloc_sync 00:07:12.076 EAL: No shared files mode enabled, IPC is disabled 00:07:12.076 EAL: Heap on socket 0 was shrunk by 18MB 00:07:12.076 EAL: Trying to obtain current memory policy. 00:07:12.076 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:12.076 EAL: Restoring previous memory policy: 0 00:07:12.076 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.076 EAL: request: mp_malloc_sync 00:07:12.076 EAL: No shared files mode enabled, IPC is disabled 00:07:12.076 EAL: Heap on socket 0 was expanded by 34MB 00:07:12.076 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.076 EAL: request: mp_malloc_sync 00:07:12.076 EAL: No shared files mode enabled, IPC is disabled 00:07:12.076 EAL: Heap on socket 0 was shrunk by 34MB 00:07:12.076 EAL: Trying to obtain current memory policy. 00:07:12.076 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:12.076 EAL: Restoring previous memory policy: 0 00:07:12.076 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.076 EAL: request: mp_malloc_sync 00:07:12.076 EAL: No shared files mode enabled, IPC is disabled 00:07:12.076 EAL: Heap on socket 0 was expanded by 66MB 00:07:12.076 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.076 EAL: request: mp_malloc_sync 00:07:12.076 EAL: No shared files mode enabled, IPC is disabled 00:07:12.076 EAL: Heap on socket 0 was shrunk by 66MB 00:07:12.076 EAL: Trying to obtain current memory policy. 00:07:12.076 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:12.076 EAL: Restoring previous memory policy: 0 00:07:12.076 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.076 EAL: request: mp_malloc_sync 00:07:12.076 EAL: No shared files mode enabled, IPC is disabled 00:07:12.076 EAL: Heap on socket 0 was expanded by 130MB 00:07:12.076 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.076 EAL: request: mp_malloc_sync 00:07:12.076 EAL: No shared files mode enabled, IPC is disabled 00:07:12.076 EAL: Heap on socket 0 was shrunk by 130MB 00:07:12.076 EAL: Trying to obtain current memory policy. 00:07:12.076 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:12.076 EAL: Restoring previous memory policy: 0 00:07:12.076 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.076 EAL: request: mp_malloc_sync 00:07:12.076 EAL: No shared files mode enabled, IPC is disabled 00:07:12.076 EAL: Heap on socket 0 was expanded by 258MB 00:07:12.076 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.334 EAL: request: mp_malloc_sync 00:07:12.334 EAL: No shared files mode enabled, IPC is disabled 00:07:12.334 EAL: Heap on socket 0 was shrunk by 258MB 00:07:12.334 EAL: Trying to obtain current memory policy. 
00:07:12.334 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:12.334 EAL: Restoring previous memory policy: 0 00:07:12.334 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.334 EAL: request: mp_malloc_sync 00:07:12.334 EAL: No shared files mode enabled, IPC is disabled 00:07:12.334 EAL: Heap on socket 0 was expanded by 514MB 00:07:12.334 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.592 EAL: request: mp_malloc_sync 00:07:12.592 EAL: No shared files mode enabled, IPC is disabled 00:07:12.592 EAL: Heap on socket 0 was shrunk by 514MB 00:07:12.593 EAL: Trying to obtain current memory policy. 00:07:12.593 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:12.593 EAL: Restoring previous memory policy: 0 00:07:12.593 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.593 EAL: request: mp_malloc_sync 00:07:12.593 EAL: No shared files mode enabled, IPC is disabled 00:07:12.593 EAL: Heap on socket 0 was expanded by 1026MB 00:07:12.851 EAL: Calling mem event callback 'spdk:(nil)' 00:07:13.109 passed 00:07:13.109 00:07:13.109 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.109 suites 1 1 n/a 0 0 00:07:13.109 tests 2 2 2 0 0 00:07:13.109 asserts 6669 6669 6669 0 n/a 00:07:13.109 00:07:13.109 Elapsed time = 1.470 seconds 00:07:13.109 EAL: request: mp_malloc_sync 00:07:13.109 EAL: No shared files mode enabled, IPC is disabled 00:07:13.109 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:13.109 EAL: Calling mem event callback 'spdk:(nil)' 00:07:13.109 EAL: request: mp_malloc_sync 00:07:13.109 EAL: No shared files mode enabled, IPC is disabled 00:07:13.109 EAL: Heap on socket 0 was shrunk by 2MB 00:07:13.109 EAL: No shared files mode enabled, IPC is disabled 00:07:13.109 EAL: No shared files mode enabled, IPC is disabled 00:07:13.109 EAL: No shared files mode enabled, IPC is disabled 00:07:13.109 ************************************ 00:07:13.109 END TEST env_vtophys 00:07:13.109 ************************************ 00:07:13.109 00:07:13.109 real 0m1.788s 00:07:13.109 user 0m0.734s 00:07:13.109 sys 0m0.849s 00:07:13.109 07:53:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.109 07:53:18 -- common/autotest_common.sh@10 -- # set +x 00:07:13.109 07:53:18 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:13.109 07:53:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:13.109 07:53:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:13.109 07:53:18 -- common/autotest_common.sh@10 -- # set +x 00:07:13.109 ************************************ 00:07:13.109 START TEST env_pci 00:07:13.109 ************************************ 00:07:13.109 07:53:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:13.109 00:07:13.109 00:07:13.109 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.109 http://cunit.sourceforge.net/ 00:07:13.109 00:07:13.109 00:07:13.109 Suite: pci 00:07:13.109 Test: pci_hook ...[2024-07-13 07:53:18.771524] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 51930 has claimed it 00:07:13.109 passed 00:07:13.109 00:07:13.109 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.109 suites 1 1 n/a 0 0 00:07:13.109 tests 1 1 1 0 0 00:07:13.109 asserts 25 25 25 0 n/a 00:07:13.109 00:07:13.109 Elapsed time = 0.000 seconds 00:07:13.109 EAL: Cannot find device (10000:00:01.0) 00:07:13.109 EAL: Failed to attach device 
on primary process 00:07:13.109 ************************************ 00:07:13.109 END TEST env_pci 00:07:13.109 ************************************ 00:07:13.109 00:07:13.109 real 0m0.057s 00:07:13.109 user 0m0.027s 00:07:13.109 sys 0m0.031s 00:07:13.109 07:53:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.109 07:53:18 -- common/autotest_common.sh@10 -- # set +x 00:07:13.109 07:53:18 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:13.109 07:53:18 -- env/env.sh@15 -- # uname 00:07:13.109 07:53:18 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:13.109 07:53:18 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:13.109 07:53:18 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:13.109 07:53:18 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:07:13.109 07:53:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:13.109 07:53:18 -- common/autotest_common.sh@10 -- # set +x 00:07:13.109 ************************************ 00:07:13.109 START TEST env_dpdk_post_init 00:07:13.109 ************************************ 00:07:13.109 07:53:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:13.372 EAL: Detected CPU lcores: 10 00:07:13.372 EAL: Detected NUMA nodes: 1 00:07:13.372 EAL: Detected static linkage of DPDK 00:07:13.372 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:13.372 EAL: Selected IOVA mode 'PA' 00:07:13.372 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:13.372 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket 0) 00:07:13.372 Starting DPDK initialization... 00:07:13.372 Starting SPDK post initialization... 00:07:13.372 SPDK NVMe probe 00:07:13.372 Attaching to 0000:00:06.0 00:07:13.372 Attached to 0000:00:06.0 00:07:13.372 Cleaning up... 
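The env_dpdk_post_init run above covers the case where the application, not SPDK, owns EAL setup: DPDK is initialized first, then SPDK's env layer is attached on top before probing NVMe, which is how the probe still reaches 0000:00:06.0. A hedged sketch of that ordering follows; the argv values and callback bodies are invented, and spdk_env_dpdk_post_init() plus spdk_nvme_probe() are the public APIs assumed here.

/*
 * Hedged sketch of the init ordering env_dpdk_post_init verifies:
 * EAL first, SPDK env second, NVMe probe last. Not the test's source.
 */
#include <stdio.h>
#include <rte_eal.h>
#include "spdk/env_dpdk.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	return true; /* attach to every controller the probe finds */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attached to %s\n", trid->traddr); /* e.g. 0000:00:06.0 above */
}

int
main(void)
{
	char *eal_argv[] = {
		"post_init_sketch", "-c", "0x1",
		"--base-virtaddr=0x200000000000",
	};

	if (rte_eal_init(4, eal_argv) < 0) {
		return 1;
	}
	/* DPDK is already up, so tell SPDK not to re-initialize it. */
	if (spdk_env_dpdk_post_init(false) != 0) {
		return 1;
	}
	return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
}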
00:07:13.372 00:07:13.372 real 0m0.296s 00:07:13.372 user 0m0.044s 00:07:13.372 sys 0m0.051s 00:07:13.372 07:53:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.372 ************************************ 00:07:13.372 END TEST env_dpdk_post_init 00:07:13.372 ************************************ 00:07:13.372 07:53:19 -- common/autotest_common.sh@10 -- # set +x 00:07:13.641 07:53:19 -- env/env.sh@26 -- # uname 00:07:13.641 07:53:19 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:13.641 07:53:19 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:13.641 07:53:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:13.641 07:53:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:13.641 07:53:19 -- common/autotest_common.sh@10 -- # set +x 00:07:13.641 ************************************ 00:07:13.641 START TEST env_mem_callbacks 00:07:13.641 ************************************ 00:07:13.641 07:53:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:13.641 EAL: Detected CPU lcores: 10 00:07:13.641 EAL: Detected NUMA nodes: 1 00:07:13.641 EAL: Detected static linkage of DPDK 00:07:13.641 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:13.641 EAL: Selected IOVA mode 'PA' 00:07:13.641 00:07:13.641 00:07:13.641 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.641 http://cunit.sourceforge.net/ 00:07:13.641 00:07:13.641 00:07:13.641 Suite: memory 00:07:13.641 Test: test ... 00:07:13.641 register 0x200000200000 2097152 00:07:13.641 malloc 3145728 00:07:13.641 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:13.641 register 0x200000400000 4194304 00:07:13.641 buf 0x200000500000 len 3145728 PASSED 00:07:13.641 malloc 64 00:07:13.641 buf 0x2000004fff40 len 64 PASSED 00:07:13.641 malloc 4194304 00:07:13.641 register 0x200000800000 6291456 00:07:13.641 buf 0x200000a00000 len 4194304 PASSED 00:07:13.641 free 0x200000500000 3145728 00:07:13.641 free 0x2000004fff40 64 00:07:13.641 unregister 0x200000400000 4194304 PASSED 00:07:13.641 free 0x200000a00000 4194304 00:07:13.641 unregister 0x200000800000 6291456 PASSED 00:07:13.641 malloc 8388608 00:07:13.641 register 0x200000400000 10485760 00:07:13.641 buf 0x200000600000 len 8388608 PASSED 00:07:13.641 free 0x200000600000 8388608 00:07:13.641 unregister 0x200000400000 10485760 PASSED 00:07:13.641 passed 00:07:13.641 00:07:13.641 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.641 suites 1 1 n/a 0 0 00:07:13.641 tests 1 1 1 0 0 00:07:13.642 asserts 15 15 15 0 n/a 00:07:13.642 00:07:13.642 Elapsed time = 0.010 seconds 00:07:13.642 ************************************ 00:07:13.642 END TEST env_mem_callbacks 00:07:13.642 ************************************ 00:07:13.642 00:07:13.642 real 0m0.181s 00:07:13.642 user 0m0.032s 00:07:13.642 sys 0m0.048s 00:07:13.642 07:53:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.642 07:53:19 -- common/autotest_common.sh@10 -- # set +x 00:07:13.642 ************************************ 00:07:13.642 END TEST env 00:07:13.642 ************************************ 00:07:13.642 00:07:13.642 real 0m2.856s 00:07:13.642 user 0m1.155s 00:07:13.642 sys 0m1.192s 00:07:13.642 07:53:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.642 07:53:19 -- common/autotest_common.sh@10 -- # set +x 00:07:13.900 07:53:19 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
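The register/unregister lines printed by mem_callbacks just above correspond to spdk_mem_register()/spdk_mem_unregister() calls, each of which walks the registered memory maps and fires their notify callbacks. A minimal sketch of that pattern, assuming spdk_env_init() has already run and that registration is 2 MB granular; the region here is plain heap memory, so it only illustrates the callback traffic and is not DMA-ready.

/*
 * Hedged sketch matching the mem_callbacks output: each register /
 * unregister call fires the registered mem map notify callbacks.
 * Assumes an already-initialized SPDK env; illustrative only.
 */
#include <stdlib.h>
#include "spdk/env.h"

static void
register_one_region(void)
{
	size_t len = 2 * 1024 * 1024; /* registration is 2 MB granular */
	void *vaddr = NULL;

	if (posix_memalign(&vaddr, len, len) != 0) {
		return;
	}
	if (spdk_mem_register(vaddr, len) == 0) {
		/* ... region is now visible to SPDK's memory maps ... */
		spdk_mem_unregister(vaddr, len);
	}
	free(vaddr);
}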
00:07:13.900 07:53:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:13.900 07:53:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:13.900 07:53:19 -- common/autotest_common.sh@10 -- # set +x 00:07:13.900 ************************************ 00:07:13.900 START TEST rpc 00:07:13.900 ************************************ 00:07:13.900 07:53:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:13.900 * Looking for test storage... 00:07:13.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:13.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.900 07:53:19 -- rpc/rpc.sh@65 -- # spdk_pid=52069 00:07:13.900 07:53:19 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:13.900 07:53:19 -- rpc/rpc.sh@67 -- # waitforlisten 52069 00:07:13.900 07:53:19 -- common/autotest_common.sh@819 -- # '[' -z 52069 ']' 00:07:13.900 07:53:19 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:13.900 07:53:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.900 07:53:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:13.900 07:53:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.900 07:53:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:13.900 07:53:19 -- common/autotest_common.sh@10 -- # set +x 00:07:14.158 [2024-07-13 07:53:19.720822] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:14.158 [2024-07-13 07:53:19.721095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid52069 ] 00:07:14.158 [2024-07-13 07:53:19.868975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.158 [2024-07-13 07:53:19.923896] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:14.158 [2024-07-13 07:53:19.924170] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:14.158 [2024-07-13 07:53:19.924218] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 52069' to capture a snapshot of events at runtime. 00:07:14.158 [2024-07-13 07:53:19.924249] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid52069 for offline analysis/debug. 
00:07:14.158 [2024-07-13 07:53:19.924330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.725 07:53:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:14.725 07:53:20 -- common/autotest_common.sh@852 -- # return 0 00:07:14.725 07:53:20 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:14.725 07:53:20 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:14.725 07:53:20 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:14.725 07:53:20 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:14.725 07:53:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:14.725 07:53:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.725 07:53:20 -- common/autotest_common.sh@10 -- # set +x 00:07:14.986 ************************************ 00:07:14.986 START TEST rpc_integrity 00:07:14.986 ************************************ 00:07:14.986 07:53:20 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:07:14.986 07:53:20 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:14.986 07:53:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:14.986 07:53:20 -- common/autotest_common.sh@10 -- # set +x 00:07:14.986 07:53:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:14.986 07:53:20 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:14.986 07:53:20 -- rpc/rpc.sh@13 -- # jq length 00:07:14.986 07:53:20 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:14.986 07:53:20 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:14.986 07:53:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:14.986 07:53:20 -- common/autotest_common.sh@10 -- # set +x 00:07:14.986 07:53:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:14.986 07:53:20 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:14.986 07:53:20 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:14.986 07:53:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:14.986 07:53:20 -- common/autotest_common.sh@10 -- # set +x 00:07:14.986 07:53:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:14.986 07:53:20 -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:14.986 { 00:07:14.986 "name": "Malloc0", 00:07:14.986 "aliases": [ 00:07:14.986 "60997c90-f3f2-45b8-9969-556b3c7d8e39" 00:07:14.986 ], 00:07:14.986 "product_name": "Malloc disk", 00:07:14.986 "block_size": 512, 00:07:14.986 "num_blocks": 16384, 00:07:14.986 "uuid": "60997c90-f3f2-45b8-9969-556b3c7d8e39", 00:07:14.986 "assigned_rate_limits": { 00:07:14.986 "rw_ios_per_sec": 0, 00:07:14.986 "rw_mbytes_per_sec": 0, 00:07:14.986 "r_mbytes_per_sec": 0, 00:07:14.986 "w_mbytes_per_sec": 0 00:07:14.986 }, 00:07:14.986 "claimed": false, 00:07:14.986 "zoned": false, 00:07:14.986 "supported_io_types": { 00:07:14.986 "read": true, 00:07:14.986 "write": true, 00:07:14.986 "unmap": true, 00:07:14.986 "write_zeroes": true, 00:07:14.986 "flush": true, 00:07:14.986 "reset": true, 00:07:14.986 "compare": false, 00:07:14.986 "compare_and_write": false, 00:07:14.986 "abort": true, 00:07:14.986 "nvme_admin": false, 00:07:14.986 "nvme_io": false 00:07:14.986 }, 00:07:14.986 "memory_domains": [ 00:07:14.986 { 00:07:14.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.986 
"dma_device_type": 2 00:07:14.986 } 00:07:14.986 ], 00:07:14.986 "driver_specific": {} 00:07:14.986 } 00:07:14.986 ]' 00:07:14.986 07:53:20 -- rpc/rpc.sh@17 -- # jq length 00:07:14.986 07:53:20 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:14.986 07:53:20 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:14.986 07:53:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:14.986 07:53:20 -- common/autotest_common.sh@10 -- # set +x 00:07:14.986 [2024-07-13 07:53:20.690905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:14.986 [2024-07-13 07:53:20.690984] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:14.986 [2024-07-13 07:53:20.691047] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027380 00:07:14.986 [2024-07-13 07:53:20.691077] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:14.986 [2024-07-13 07:53:20.693024] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:14.986 [2024-07-13 07:53:20.693077] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:14.986 Passthru0 00:07:14.986 07:53:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:14.986 07:53:20 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:14.986 07:53:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:14.986 07:53:20 -- common/autotest_common.sh@10 -- # set +x 00:07:14.986 07:53:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:14.986 07:53:20 -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:14.986 { 00:07:14.986 "name": "Malloc0", 00:07:14.986 "aliases": [ 00:07:14.986 "60997c90-f3f2-45b8-9969-556b3c7d8e39" 00:07:14.986 ], 00:07:14.987 "product_name": "Malloc disk", 00:07:14.987 "block_size": 512, 00:07:14.987 "num_blocks": 16384, 00:07:14.987 "uuid": "60997c90-f3f2-45b8-9969-556b3c7d8e39", 00:07:14.987 "assigned_rate_limits": { 00:07:14.987 "rw_ios_per_sec": 0, 00:07:14.987 "rw_mbytes_per_sec": 0, 00:07:14.987 "r_mbytes_per_sec": 0, 00:07:14.987 "w_mbytes_per_sec": 0 00:07:14.987 }, 00:07:14.987 "claimed": true, 00:07:14.987 "claim_type": "exclusive_write", 00:07:14.987 "zoned": false, 00:07:14.987 "supported_io_types": { 00:07:14.987 "read": true, 00:07:14.987 "write": true, 00:07:14.987 "unmap": true, 00:07:14.987 "write_zeroes": true, 00:07:14.987 "flush": true, 00:07:14.987 "reset": true, 00:07:14.987 "compare": false, 00:07:14.987 "compare_and_write": false, 00:07:14.987 "abort": true, 00:07:14.987 "nvme_admin": false, 00:07:14.987 "nvme_io": false 00:07:14.987 }, 00:07:14.987 "memory_domains": [ 00:07:14.987 { 00:07:14.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.987 "dma_device_type": 2 00:07:14.987 } 00:07:14.987 ], 00:07:14.987 "driver_specific": {} 00:07:14.987 }, 00:07:14.987 { 00:07:14.987 "name": "Passthru0", 00:07:14.987 "aliases": [ 00:07:14.987 "7a422e5b-6595-561b-9996-ab1706499c9b" 00:07:14.987 ], 00:07:14.987 "product_name": "passthru", 00:07:14.987 "block_size": 512, 00:07:14.987 "num_blocks": 16384, 00:07:14.987 "uuid": "7a422e5b-6595-561b-9996-ab1706499c9b", 00:07:14.987 "assigned_rate_limits": { 00:07:14.987 "rw_ios_per_sec": 0, 00:07:14.987 "rw_mbytes_per_sec": 0, 00:07:14.987 "r_mbytes_per_sec": 0, 00:07:14.987 "w_mbytes_per_sec": 0 00:07:14.987 }, 00:07:14.987 "claimed": false, 00:07:14.987 "zoned": false, 00:07:14.987 "supported_io_types": { 00:07:14.987 "read": true, 00:07:14.987 "write": true, 00:07:14.987 "unmap": true, 00:07:14.987 
"write_zeroes": true, 00:07:14.987 "flush": true, 00:07:14.987 "reset": true, 00:07:14.987 "compare": false, 00:07:14.987 "compare_and_write": false, 00:07:14.987 "abort": true, 00:07:14.987 "nvme_admin": false, 00:07:14.987 "nvme_io": false 00:07:14.987 }, 00:07:14.987 "memory_domains": [ 00:07:14.987 { 00:07:14.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.987 "dma_device_type": 2 00:07:14.987 } 00:07:14.987 ], 00:07:14.987 "driver_specific": { 00:07:14.987 "passthru": { 00:07:14.987 "name": "Passthru0", 00:07:14.987 "base_bdev_name": "Malloc0" 00:07:14.987 } 00:07:14.987 } 00:07:14.987 } 00:07:14.987 ]' 00:07:14.987 07:53:20 -- rpc/rpc.sh@21 -- # jq length 00:07:14.987 07:53:20 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:14.987 07:53:20 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:14.987 07:53:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:14.987 07:53:20 -- common/autotest_common.sh@10 -- # set +x 00:07:14.987 07:53:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:14.987 07:53:20 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:14.987 07:53:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:14.987 07:53:20 -- common/autotest_common.sh@10 -- # set +x 00:07:14.987 07:53:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:14.987 07:53:20 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:14.987 07:53:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:14.987 07:53:20 -- common/autotest_common.sh@10 -- # set +x 00:07:14.987 07:53:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:14.987 07:53:20 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:14.987 07:53:20 -- rpc/rpc.sh@26 -- # jq length 00:07:15.246 07:53:20 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:15.246 00:07:15.246 real 0m0.312s 00:07:15.246 user 0m0.216s 00:07:15.246 sys 0m0.038s 00:07:15.246 ************************************ 00:07:15.246 END TEST rpc_integrity 00:07:15.246 ************************************ 00:07:15.246 07:53:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.246 07:53:20 -- common/autotest_common.sh@10 -- # set +x 00:07:15.246 07:53:20 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:15.246 07:53:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:15.246 07:53:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:15.246 07:53:20 -- common/autotest_common.sh@10 -- # set +x 00:07:15.246 ************************************ 00:07:15.246 START TEST rpc_plugins 00:07:15.246 ************************************ 00:07:15.246 07:53:20 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:07:15.246 07:53:20 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:15.246 07:53:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:15.246 07:53:20 -- common/autotest_common.sh@10 -- # set +x 00:07:15.246 07:53:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:15.246 07:53:20 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:15.246 07:53:20 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:15.246 07:53:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:15.246 07:53:20 -- common/autotest_common.sh@10 -- # set +x 00:07:15.246 07:53:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:15.246 07:53:20 -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:15.246 { 00:07:15.246 "name": "Malloc1", 00:07:15.246 "aliases": [ 00:07:15.246 "e431c172-d96d-450c-bdd0-27db9caf476f" 00:07:15.246 ], 00:07:15.246 "product_name": "Malloc disk", 00:07:15.246 
"block_size": 4096, 00:07:15.246 "num_blocks": 256, 00:07:15.246 "uuid": "e431c172-d96d-450c-bdd0-27db9caf476f", 00:07:15.246 "assigned_rate_limits": { 00:07:15.246 "rw_ios_per_sec": 0, 00:07:15.246 "rw_mbytes_per_sec": 0, 00:07:15.246 "r_mbytes_per_sec": 0, 00:07:15.246 "w_mbytes_per_sec": 0 00:07:15.246 }, 00:07:15.246 "claimed": false, 00:07:15.246 "zoned": false, 00:07:15.246 "supported_io_types": { 00:07:15.246 "read": true, 00:07:15.246 "write": true, 00:07:15.246 "unmap": true, 00:07:15.246 "write_zeroes": true, 00:07:15.246 "flush": true, 00:07:15.246 "reset": true, 00:07:15.246 "compare": false, 00:07:15.246 "compare_and_write": false, 00:07:15.246 "abort": true, 00:07:15.246 "nvme_admin": false, 00:07:15.246 "nvme_io": false 00:07:15.246 }, 00:07:15.246 "memory_domains": [ 00:07:15.246 { 00:07:15.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.246 "dma_device_type": 2 00:07:15.246 } 00:07:15.246 ], 00:07:15.246 "driver_specific": {} 00:07:15.246 } 00:07:15.246 ]' 00:07:15.246 07:53:20 -- rpc/rpc.sh@32 -- # jq length 00:07:15.246 07:53:20 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:15.246 07:53:20 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:15.246 07:53:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:15.246 07:53:20 -- common/autotest_common.sh@10 -- # set +x 00:07:15.246 07:53:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:15.246 07:53:21 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:15.246 07:53:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:15.246 07:53:21 -- common/autotest_common.sh@10 -- # set +x 00:07:15.246 07:53:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:15.246 07:53:21 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:15.246 07:53:21 -- rpc/rpc.sh@36 -- # jq length 00:07:15.506 07:53:21 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:15.506 00:07:15.506 real 0m0.167s 00:07:15.506 user 0m0.112s 00:07:15.506 sys 0m0.023s 00:07:15.506 07:53:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.506 07:53:21 -- common/autotest_common.sh@10 -- # set +x 00:07:15.506 ************************************ 00:07:15.506 END TEST rpc_plugins 00:07:15.506 ************************************ 00:07:15.506 07:53:21 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:15.506 07:53:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:15.506 07:53:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:15.506 07:53:21 -- common/autotest_common.sh@10 -- # set +x 00:07:15.506 ************************************ 00:07:15.506 START TEST rpc_trace_cmd_test 00:07:15.506 ************************************ 00:07:15.506 07:53:21 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:07:15.506 07:53:21 -- rpc/rpc.sh@40 -- # local info 00:07:15.506 07:53:21 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:15.506 07:53:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:15.506 07:53:21 -- common/autotest_common.sh@10 -- # set +x 00:07:15.506 07:53:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:15.506 07:53:21 -- rpc/rpc.sh@42 -- # info='{ 00:07:15.506 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid52069", 00:07:15.506 "tpoint_group_mask": "0x8", 00:07:15.506 "iscsi_conn": { 00:07:15.506 "mask": "0x2", 00:07:15.506 "tpoint_mask": "0x0" 00:07:15.506 }, 00:07:15.506 "scsi": { 00:07:15.506 "mask": "0x4", 00:07:15.506 "tpoint_mask": "0x0" 00:07:15.506 }, 00:07:15.506 "bdev": { 00:07:15.506 "mask": "0x8", 00:07:15.506 "tpoint_mask": 
"0xffffffffffffffff" 00:07:15.506 }, 00:07:15.506 "nvmf_rdma": { 00:07:15.506 "mask": "0x10", 00:07:15.506 "tpoint_mask": "0x0" 00:07:15.506 }, 00:07:15.506 "nvmf_tcp": { 00:07:15.506 "mask": "0x20", 00:07:15.506 "tpoint_mask": "0x0" 00:07:15.506 }, 00:07:15.506 "ftl": { 00:07:15.506 "mask": "0x40", 00:07:15.506 "tpoint_mask": "0x0" 00:07:15.506 }, 00:07:15.506 "blobfs": { 00:07:15.506 "mask": "0x80", 00:07:15.506 "tpoint_mask": "0x0" 00:07:15.506 }, 00:07:15.506 "dsa": { 00:07:15.506 "mask": "0x200", 00:07:15.506 "tpoint_mask": "0x0" 00:07:15.506 }, 00:07:15.506 "thread": { 00:07:15.506 "mask": "0x400", 00:07:15.506 "tpoint_mask": "0x0" 00:07:15.506 }, 00:07:15.506 "nvme_pcie": { 00:07:15.506 "mask": "0x800", 00:07:15.506 "tpoint_mask": "0x0" 00:07:15.506 }, 00:07:15.506 "iaa": { 00:07:15.506 "mask": "0x1000", 00:07:15.506 "tpoint_mask": "0x0" 00:07:15.506 }, 00:07:15.506 "nvme_tcp": { 00:07:15.506 "mask": "0x2000", 00:07:15.506 "tpoint_mask": "0x0" 00:07:15.506 }, 00:07:15.506 "bdev_nvme": { 00:07:15.506 "mask": "0x4000", 00:07:15.506 "tpoint_mask": "0x0" 00:07:15.506 } 00:07:15.506 }' 00:07:15.506 07:53:21 -- rpc/rpc.sh@43 -- # jq length 00:07:15.506 07:53:21 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:07:15.506 07:53:21 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:15.506 07:53:21 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:15.506 07:53:21 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:15.506 07:53:21 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:15.506 07:53:21 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:15.765 07:53:21 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:15.765 07:53:21 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:15.765 ************************************ 00:07:15.765 END TEST rpc_trace_cmd_test 00:07:15.765 ************************************ 00:07:15.765 07:53:21 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:15.765 00:07:15.765 real 0m0.283s 00:07:15.765 user 0m0.246s 00:07:15.765 sys 0m0.032s 00:07:15.765 07:53:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.765 07:53:21 -- common/autotest_common.sh@10 -- # set +x 00:07:15.765 07:53:21 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:15.765 07:53:21 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:15.765 07:53:21 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:15.765 07:53:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:15.765 07:53:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:15.765 07:53:21 -- common/autotest_common.sh@10 -- # set +x 00:07:15.765 ************************************ 00:07:15.765 START TEST rpc_daemon_integrity 00:07:15.765 ************************************ 00:07:15.765 07:53:21 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:07:15.765 07:53:21 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:15.765 07:53:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:15.765 07:53:21 -- common/autotest_common.sh@10 -- # set +x 00:07:15.766 07:53:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:15.766 07:53:21 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:15.766 07:53:21 -- rpc/rpc.sh@13 -- # jq length 00:07:15.766 07:53:21 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:15.766 07:53:21 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:15.766 07:53:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:15.766 07:53:21 -- common/autotest_common.sh@10 -- # set +x 00:07:15.766 07:53:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:15.766 07:53:21 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:15.766 07:53:21 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:15.766 07:53:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:15.766 07:53:21 -- common/autotest_common.sh@10 -- # set +x 00:07:15.766 07:53:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:15.766 07:53:21 -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:15.766 { 00:07:15.766 "name": "Malloc2", 00:07:15.766 "aliases": [ 00:07:15.766 "0e63938d-b5a3-4bc4-b359-475f3a2d669f" 00:07:15.766 ], 00:07:15.766 "product_name": "Malloc disk", 00:07:15.766 "block_size": 512, 00:07:15.766 "num_blocks": 16384, 00:07:15.766 "uuid": "0e63938d-b5a3-4bc4-b359-475f3a2d669f", 00:07:15.766 "assigned_rate_limits": { 00:07:15.766 "rw_ios_per_sec": 0, 00:07:15.766 "rw_mbytes_per_sec": 0, 00:07:15.766 "r_mbytes_per_sec": 0, 00:07:15.766 "w_mbytes_per_sec": 0 00:07:15.766 }, 00:07:15.766 "claimed": false, 00:07:15.766 "zoned": false, 00:07:15.766 "supported_io_types": { 00:07:15.766 "read": true, 00:07:15.766 "write": true, 00:07:15.766 "unmap": true, 00:07:15.766 "write_zeroes": true, 00:07:15.766 "flush": true, 00:07:15.766 "reset": true, 00:07:15.766 "compare": false, 00:07:15.766 "compare_and_write": false, 00:07:15.766 "abort": true, 00:07:15.766 "nvme_admin": false, 00:07:15.766 "nvme_io": false 00:07:15.766 }, 00:07:15.766 "memory_domains": [ 00:07:15.766 { 00:07:15.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.766 "dma_device_type": 2 00:07:15.766 } 00:07:15.766 ], 00:07:15.766 "driver_specific": {} 00:07:15.766 } 00:07:15.766 ]' 00:07:15.766 07:53:21 -- rpc/rpc.sh@17 -- # jq length 00:07:16.025 07:53:21 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:16.025 07:53:21 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:16.025 07:53:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:16.025 07:53:21 -- common/autotest_common.sh@10 -- # set +x 00:07:16.025 [2024-07-13 07:53:21.611162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:16.025 [2024-07-13 07:53:21.611240] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:16.025 [2024-07-13 07:53:21.611295] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000029780 00:07:16.025 [2024-07-13 07:53:21.611319] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:16.025 [2024-07-13 07:53:21.613226] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:16.025 [2024-07-13 07:53:21.613299] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:16.025 Passthru0 00:07:16.025 07:53:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:16.025 07:53:21 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:16.025 07:53:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:16.025 07:53:21 -- common/autotest_common.sh@10 -- # set +x 00:07:16.025 07:53:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:16.025 07:53:21 -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:16.025 { 00:07:16.025 "name": "Malloc2", 00:07:16.025 "aliases": [ 00:07:16.025 "0e63938d-b5a3-4bc4-b359-475f3a2d669f" 00:07:16.025 ], 00:07:16.025 "product_name": "Malloc disk", 00:07:16.025 "block_size": 512, 00:07:16.025 "num_blocks": 16384, 00:07:16.025 "uuid": "0e63938d-b5a3-4bc4-b359-475f3a2d669f", 00:07:16.025 "assigned_rate_limits": { 00:07:16.025 "rw_ios_per_sec": 0, 00:07:16.025 "rw_mbytes_per_sec": 0, 00:07:16.025 "r_mbytes_per_sec": 0, 00:07:16.025 
"w_mbytes_per_sec": 0 00:07:16.025 }, 00:07:16.025 "claimed": true, 00:07:16.025 "claim_type": "exclusive_write", 00:07:16.025 "zoned": false, 00:07:16.025 "supported_io_types": { 00:07:16.025 "read": true, 00:07:16.025 "write": true, 00:07:16.025 "unmap": true, 00:07:16.025 "write_zeroes": true, 00:07:16.025 "flush": true, 00:07:16.025 "reset": true, 00:07:16.025 "compare": false, 00:07:16.025 "compare_and_write": false, 00:07:16.025 "abort": true, 00:07:16.025 "nvme_admin": false, 00:07:16.025 "nvme_io": false 00:07:16.025 }, 00:07:16.025 "memory_domains": [ 00:07:16.025 { 00:07:16.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.025 "dma_device_type": 2 00:07:16.025 } 00:07:16.025 ], 00:07:16.025 "driver_specific": {} 00:07:16.025 }, 00:07:16.025 { 00:07:16.025 "name": "Passthru0", 00:07:16.025 "aliases": [ 00:07:16.025 "f8406898-cd1e-5acf-990b-d5fb7dc8a63d" 00:07:16.025 ], 00:07:16.025 "product_name": "passthru", 00:07:16.025 "block_size": 512, 00:07:16.025 "num_blocks": 16384, 00:07:16.025 "uuid": "f8406898-cd1e-5acf-990b-d5fb7dc8a63d", 00:07:16.025 "assigned_rate_limits": { 00:07:16.025 "rw_ios_per_sec": 0, 00:07:16.025 "rw_mbytes_per_sec": 0, 00:07:16.025 "r_mbytes_per_sec": 0, 00:07:16.025 "w_mbytes_per_sec": 0 00:07:16.025 }, 00:07:16.025 "claimed": false, 00:07:16.025 "zoned": false, 00:07:16.025 "supported_io_types": { 00:07:16.025 "read": true, 00:07:16.025 "write": true, 00:07:16.025 "unmap": true, 00:07:16.025 "write_zeroes": true, 00:07:16.025 "flush": true, 00:07:16.025 "reset": true, 00:07:16.025 "compare": false, 00:07:16.025 "compare_and_write": false, 00:07:16.025 "abort": true, 00:07:16.025 "nvme_admin": false, 00:07:16.025 "nvme_io": false 00:07:16.025 }, 00:07:16.025 "memory_domains": [ 00:07:16.025 { 00:07:16.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.025 "dma_device_type": 2 00:07:16.025 } 00:07:16.025 ], 00:07:16.025 "driver_specific": { 00:07:16.025 "passthru": { 00:07:16.025 "name": "Passthru0", 00:07:16.025 "base_bdev_name": "Malloc2" 00:07:16.025 } 00:07:16.025 } 00:07:16.025 } 00:07:16.025 ]' 00:07:16.025 07:53:21 -- rpc/rpc.sh@21 -- # jq length 00:07:16.025 07:53:21 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:16.025 07:53:21 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:16.025 07:53:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:16.025 07:53:21 -- common/autotest_common.sh@10 -- # set +x 00:07:16.025 07:53:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:16.025 07:53:21 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:16.025 07:53:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:16.025 07:53:21 -- common/autotest_common.sh@10 -- # set +x 00:07:16.025 07:53:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:16.025 07:53:21 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:16.025 07:53:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:16.026 07:53:21 -- common/autotest_common.sh@10 -- # set +x 00:07:16.026 07:53:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:16.026 07:53:21 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:16.026 07:53:21 -- rpc/rpc.sh@26 -- # jq length 00:07:16.026 ************************************ 00:07:16.026 END TEST rpc_daemon_integrity 00:07:16.026 ************************************ 00:07:16.026 07:53:21 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:16.026 00:07:16.026 real 0m0.330s 00:07:16.026 user 0m0.225s 00:07:16.026 sys 0m0.043s 00:07:16.026 07:53:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.026 
07:53:21 -- common/autotest_common.sh@10 -- # set +x 00:07:16.026 07:53:21 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:16.026 07:53:21 -- rpc/rpc.sh@84 -- # killprocess 52069 00:07:16.026 07:53:21 -- common/autotest_common.sh@926 -- # '[' -z 52069 ']' 00:07:16.026 07:53:21 -- common/autotest_common.sh@930 -- # kill -0 52069 00:07:16.026 07:53:21 -- common/autotest_common.sh@931 -- # uname 00:07:16.026 07:53:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:16.026 07:53:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 52069 00:07:16.285 07:53:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:16.285 killing process with pid 52069 00:07:16.285 07:53:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:16.285 07:53:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 52069' 00:07:16.285 07:53:21 -- common/autotest_common.sh@945 -- # kill 52069 00:07:16.285 07:53:21 -- common/autotest_common.sh@950 -- # wait 52069 00:07:16.543 00:07:16.543 real 0m2.711s 00:07:16.543 user 0m3.404s 00:07:16.543 sys 0m0.708s 00:07:16.543 07:53:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.543 07:53:22 -- common/autotest_common.sh@10 -- # set +x 00:07:16.543 ************************************ 00:07:16.543 END TEST rpc 00:07:16.543 ************************************ 00:07:16.543 07:53:22 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:16.543 07:53:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:16.543 07:53:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:16.543 07:53:22 -- common/autotest_common.sh@10 -- # set +x 00:07:16.543 ************************************ 00:07:16.543 START TEST rpc_client 00:07:16.543 ************************************ 00:07:16.543 07:53:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:16.543 * Looking for test storage... 
00:07:16.543 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:16.543 07:53:22 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:16.802 OK 00:07:16.802 07:53:22 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:16.802 ************************************ 00:07:16.802 END TEST rpc_client 00:07:16.802 ************************************ 00:07:16.802 00:07:16.802 real 0m0.246s 00:07:16.802 user 0m0.071s 00:07:16.802 sys 0m0.075s 00:07:16.802 07:53:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.802 07:53:22 -- common/autotest_common.sh@10 -- # set +x 00:07:16.802 07:53:22 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:16.802 07:53:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:16.802 07:53:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:16.802 07:53:22 -- common/autotest_common.sh@10 -- # set +x 00:07:16.802 ************************************ 00:07:16.802 START TEST json_config 00:07:16.802 ************************************ 00:07:16.802 07:53:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:16.802 07:53:22 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:16.802 07:53:22 -- nvmf/common.sh@7 -- # uname -s 00:07:16.802 07:53:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:16.802 07:53:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:16.802 07:53:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:16.802 07:53:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:16.802 07:53:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:16.802 07:53:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:16.802 07:53:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:16.802 07:53:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:16.802 07:53:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:16.802 07:53:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:16.802 07:53:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd955a38-6530-4f65-92b2-440d83f734fd 00:07:16.802 07:53:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd955a38-6530-4f65-92b2-440d83f734fd 00:07:16.802 07:53:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:16.802 07:53:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:16.802 07:53:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:16.802 07:53:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:16.802 07:53:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:16.802 07:53:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:16.802 07:53:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:16.802 07:53:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:07:16.802 07:53:22 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:07:16.802 07:53:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:07:16.802 07:53:22 -- paths/export.sh@5 -- # export PATH 00:07:16.802 07:53:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:07:16.802 07:53:22 -- nvmf/common.sh@46 -- # : 0 00:07:16.802 07:53:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:16.802 07:53:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:16.802 07:53:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:16.802 07:53:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:16.802 07:53:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:16.802 07:53:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:16.802 07:53:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:16.802 07:53:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:17.061 07:53:22 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:07:17.061 07:53:22 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:07:17.061 07:53:22 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:07:17.061 07:53:22 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:17.061 07:53:22 -- json_config/json_config.sh@30 -- # app_pid=([target]="" [initiator]="") 00:07:17.061 07:53:22 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:07:17.061 07:53:22 -- json_config/json_config.sh@31 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock' [initiator]='/var/tmp/spdk_initiator.sock') 00:07:17.061 07:53:22 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:07:17.061 07:53:22 -- json_config/json_config.sh@32 -- # app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024') 00:07:17.061 07:53:22 -- json_config/json_config.sh@32 -- # declare -A app_params 00:07:17.061 07:53:22 -- json_config/json_config.sh@33 -- # configs_path=([target]="$rootdir/spdk_tgt_config.json" [initiator]="$rootdir/spdk_initiator_config.json") 00:07:17.061 07:53:22 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:07:17.061 07:53:22 -- json_config/json_config.sh@43 -- # last_event_id=0 00:07:17.061 INFO: JSON configuration test init 00:07:17.061 07:53:22 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:17.061 07:53:22 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:07:17.061 07:53:22 -- json_config/json_config.sh@420 -- # json_config_test_init 00:07:17.061 07:53:22 -- 
json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:07:17.061 07:53:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:17.061 07:53:22 -- common/autotest_common.sh@10 -- # set +x 00:07:17.061 07:53:22 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:07:17.061 07:53:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:17.061 07:53:22 -- common/autotest_common.sh@10 -- # set +x 00:07:17.061 Waiting for target to run... 00:07:17.061 07:53:22 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:07:17.061 07:53:22 -- json_config/json_config.sh@98 -- # local app=target 00:07:17.061 07:53:22 -- json_config/json_config.sh@99 -- # shift 00:07:17.061 07:53:22 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:07:17.061 07:53:22 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:07:17.061 07:53:22 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:07:17.061 07:53:22 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:07:17.061 07:53:22 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:07:17.061 07:53:22 -- json_config/json_config.sh@111 -- # app_pid[$app]=52358 00:07:17.061 07:53:22 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:07:17.061 07:53:22 -- json_config/json_config.sh@114 -- # waitforlisten 52358 /var/tmp/spdk_tgt.sock 00:07:17.061 07:53:22 -- common/autotest_common.sh@819 -- # '[' -z 52358 ']' 00:07:17.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:17.061 07:53:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:17.061 07:53:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:17.061 07:53:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:17.061 07:53:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:17.061 07:53:22 -- common/autotest_common.sh@10 -- # set +x 00:07:17.061 07:53:22 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:17.062 [2024-07-13 07:53:22.765407] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
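For reference, the --wait-for-rpc startup used here (via json_config_test_start_app) parks the target before subsystem initialization until an RPC releases it. A minimal sketch of that handshake, using the workspace paths from this job and two methods that appear in the rpc_get_methods listing at the end of this log:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  # Release the parked target, then block until initialization has finished.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_wait_init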
00:07:17.062 [2024-07-13 07:53:22.765684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid52358 ] 00:07:17.627 [2024-07-13 07:53:23.181195] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.627 [2024-07-13 07:53:23.211327] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:17.627 [2024-07-13 07:53:23.211869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.886 00:07:17.886 07:53:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:17.886 07:53:23 -- common/autotest_common.sh@852 -- # return 0 00:07:17.886 07:53:23 -- json_config/json_config.sh@115 -- # echo '' 00:07:17.886 07:53:23 -- json_config/json_config.sh@322 -- # create_accel_config 00:07:17.886 07:53:23 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:07:17.886 07:53:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:17.886 07:53:23 -- common/autotest_common.sh@10 -- # set +x 00:07:17.886 07:53:23 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:07:17.886 07:53:23 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:07:17.886 07:53:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:17.886 07:53:23 -- common/autotest_common.sh@10 -- # set +x 00:07:17.886 07:53:23 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:17.886 07:53:23 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:07:17.886 07:53:23 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:18.451 07:53:23 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:07:18.451 07:53:23 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:07:18.451 07:53:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:18.451 07:53:23 -- common/autotest_common.sh@10 -- # set +x 00:07:18.451 07:53:23 -- json_config/json_config.sh@48 -- # local ret=0 00:07:18.451 07:53:23 -- json_config/json_config.sh@49 -- # enabled_types=("bdev_register" "bdev_unregister") 00:07:18.451 07:53:23 -- json_config/json_config.sh@49 -- # local enabled_types 00:07:18.451 07:53:23 -- json_config/json_config.sh@51 -- # get_types=($(tgt_rpc notify_get_types | jq -r '.[]')) 00:07:18.451 07:53:23 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:07:18.451 07:53:23 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:18.451 07:53:23 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:07:18.451 07:53:24 -- json_config/json_config.sh@51 -- # local get_types 00:07:18.451 07:53:24 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:07:18.451 07:53:24 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:07:18.451 07:53:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:18.451 07:53:24 -- common/autotest_common.sh@10 -- # set +x 00:07:18.451 07:53:24 -- json_config/json_config.sh@58 -- # return 0 00:07:18.451 07:53:24 -- json_config/json_config.sh@331 -- # [[ 1 -eq 1 ]] 00:07:18.451 07:53:24 -- json_config/json_config.sh@332 -- # 
create_bdev_subsystem_config 00:07:18.451 07:53:24 -- json_config/json_config.sh@158 -- # timing_enter create_bdev_subsystem_config 00:07:18.451 07:53:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:18.451 07:53:24 -- common/autotest_common.sh@10 -- # set +x 00:07:18.451 07:53:24 -- json_config/json_config.sh@160 -- # expected_notifications=() 00:07:18.451 07:53:24 -- json_config/json_config.sh@160 -- # local expected_notifications 00:07:18.451 07:53:24 -- json_config/json_config.sh@164 -- # expected_notifications+=($(get_notifications)) 00:07:18.451 07:53:24 -- json_config/json_config.sh@164 -- # get_notifications 00:07:18.451 07:53:24 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:07:18.451 07:53:24 -- json_config/json_config.sh@64 -- # IFS=: 00:07:18.451 07:53:24 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:18.451 07:53:24 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:07:18.451 07:53:24 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:07:18.451 07:53:24 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:07:18.711 07:53:24 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:07:18.711 07:53:24 -- json_config/json_config.sh@64 -- # IFS=: 00:07:18.711 07:53:24 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:18.711 07:53:24 -- json_config/json_config.sh@166 -- # [[ 1 -eq 1 ]] 00:07:18.711 07:53:24 -- json_config/json_config.sh@167 -- # local lvol_store_base_bdev=Nvme0n1 00:07:18.711 07:53:24 -- json_config/json_config.sh@169 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:07:18.711 07:53:24 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:07:18.969 Nvme0n1p0 Nvme0n1p1 00:07:18.969 07:53:24 -- json_config/json_config.sh@170 -- # tgt_rpc bdev_split_create Malloc0 3 00:07:18.969 07:53:24 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:07:19.227 [2024-07-13 07:53:24.804953] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:19.227 [2024-07-13 07:53:24.805036] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:19.227 00:07:19.227 07:53:24 -- json_config/json_config.sh@171 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:07:19.227 07:53:24 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:07:19.227 Malloc3 00:07:19.227 07:53:25 -- json_config/json_config.sh@172 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:07:19.227 07:53:25 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:07:19.537 [2024-07-13 07:53:25.145031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:19.537 [2024-07-13 07:53:25.145121] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:19.537 [2024-07-13 07:53:25.145179] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000035180 00:07:19.537 [2024-07-13 07:53:25.145206] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:07:19.537 [2024-07-13 07:53:25.147116] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:19.537 [2024-07-13 07:53:25.147181] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:07:19.537 PTBdevFromMalloc3 00:07:19.537 07:53:25 -- json_config/json_config.sh@174 -- # tgt_rpc bdev_null_create Null0 32 512 00:07:19.537 07:53:25 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:07:19.537 Null0 00:07:19.537 07:53:25 -- json_config/json_config.sh@176 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:07:19.537 07:53:25 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:07:19.795 Malloc0 00:07:19.795 07:53:25 -- json_config/json_config.sh@177 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:07:19.796 07:53:25 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:07:20.054 Malloc1 00:07:20.054 07:53:25 -- json_config/json_config.sh@190 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:07:20.054 07:53:25 -- json_config/json_config.sh@193 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:07:20.312 102400+0 records in 00:07:20.312 102400+0 records out 00:07:20.312 104857600 bytes (105 MB) copied, 0.371021 s, 283 MB/s 00:07:20.313 07:53:26 -- json_config/json_config.sh@194 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:07:20.313 07:53:26 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:07:20.571 aio_disk 00:07:20.571 07:53:26 -- json_config/json_config.sh@195 -- # expected_notifications+=(bdev_register:aio_disk) 00:07:20.571 07:53:26 -- json_config/json_config.sh@200 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:07:20.571 07:53:26 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:07:20.829 a898d9e5-278d-4091-b9da-b72272d6f167 00:07:20.829 07:53:26 -- json_config/json_config.sh@207 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:07:20.829 07:53:26 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:07:20.829 07:53:26 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:07:21.087 07:53:26 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:07:21.087 07:53:26 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:07:21.087 07:53:26 -- json_config/json_config.sh@207 -- # tgt_rpc 
bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:07:21.087 07:53:26 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:07:21.346 07:53:26 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:07:21.346 07:53:26 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:07:21.605 07:53:27 -- json_config/json_config.sh@210 -- # [[ 0 -eq 1 ]] 00:07:21.605 07:53:27 -- json_config/json_config.sh@225 -- # [[ 0 -eq 1 ]] 00:07:21.605 07:53:27 -- json_config/json_config.sh@231 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:668b0cd3-e4ef-4973-ac6c-2214c674b655 bdev_register:81ae976a-bc38-4c63-acda-8a2c1b361512 bdev_register:17cca706-4fa8-497c-9d31-541cc5ff60bc bdev_register:ffecd1dd-46d3-4a62-b84a-17dd32d4a177 00:07:21.605 07:53:27 -- json_config/json_config.sh@70 -- # local events_to_check 00:07:21.605 07:53:27 -- json_config/json_config.sh@71 -- # local recorded_events 00:07:21.605 07:53:27 -- json_config/json_config.sh@74 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:07:21.605 07:53:27 -- json_config/json_config.sh@74 -- # sort 00:07:21.605 07:53:27 -- json_config/json_config.sh@74 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:668b0cd3-e4ef-4973-ac6c-2214c674b655 bdev_register:81ae976a-bc38-4c63-acda-8a2c1b361512 bdev_register:17cca706-4fa8-497c-9d31-541cc5ff60bc bdev_register:ffecd1dd-46d3-4a62-b84a-17dd32d4a177 00:07:21.605 07:53:27 -- json_config/json_config.sh@75 -- # recorded_events=($(get_notifications | sort)) 00:07:21.605 07:53:27 -- json_config/json_config.sh@75 -- # sort 00:07:21.605 07:53:27 -- json_config/json_config.sh@75 -- # get_notifications 00:07:21.605 07:53:27 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:07:21.605 07:53:27 -- json_config/json_config.sh@64 -- # IFS=: 00:07:21.605 07:53:27 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:21.605 07:53:27 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:07:21.605 07:53:27 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:07:21.605 07:53:27 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:07:21.863 07:53:27 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:07:21.863 07:53:27 -- json_config/json_config.sh@64 -- # IFS=: 00:07:21.863 07:53:27 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:21.863 07:53:27 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p1 00:07:21.863 07:53:27 -- json_config/json_config.sh@64 -- # IFS=: 00:07:21.863 07:53:27 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:21.863 07:53:27 -- json_config/json_config.sh@65 -- # echo 
bdev_register:Nvme0n1p0 00:07:21.863 07:53:27 -- json_config/json_config.sh@64 -- # IFS=: 00:07:21.863 07:53:27 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:21.863 07:53:27 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc3 00:07:21.863 07:53:27 -- json_config/json_config.sh@64 -- # IFS=: 00:07:21.863 07:53:27 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:21.863 07:53:27 -- json_config/json_config.sh@65 -- # echo bdev_register:PTBdevFromMalloc3 00:07:21.863 07:53:27 -- json_config/json_config.sh@64 -- # IFS=: 00:07:21.863 07:53:27 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:21.863 07:53:27 -- json_config/json_config.sh@65 -- # echo bdev_register:Null0 00:07:21.863 07:53:27 -- json_config/json_config.sh@64 -- # IFS=: 00:07:21.863 07:53:27 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:21.863 07:53:27 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0 00:07:21.863 07:53:27 -- json_config/json_config.sh@64 -- # IFS=: 00:07:21.863 07:53:27 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:21.864 07:53:27 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p2 00:07:21.864 07:53:27 -- json_config/json_config.sh@64 -- # IFS=: 00:07:21.864 07:53:27 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:21.864 07:53:27 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p1 00:07:21.864 07:53:27 -- json_config/json_config.sh@64 -- # IFS=: 00:07:21.864 07:53:27 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:21.864 07:53:27 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p0 00:07:21.864 07:53:27 -- json_config/json_config.sh@64 -- # IFS=: 00:07:21.864 07:53:27 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:21.864 07:53:27 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc1 00:07:21.864 07:53:27 -- json_config/json_config.sh@64 -- # IFS=: 00:07:21.864 07:53:27 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:21.864 07:53:27 -- json_config/json_config.sh@65 -- # echo bdev_register:aio_disk 00:07:21.864 07:53:27 -- json_config/json_config.sh@64 -- # IFS=: 00:07:21.864 07:53:27 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:21.864 07:53:27 -- json_config/json_config.sh@65 -- # echo bdev_register:668b0cd3-e4ef-4973-ac6c-2214c674b655 00:07:21.864 07:53:27 -- json_config/json_config.sh@64 -- # IFS=: 00:07:21.864 07:53:27 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:21.864 07:53:27 -- json_config/json_config.sh@65 -- # echo bdev_register:81ae976a-bc38-4c63-acda-8a2c1b361512 00:07:21.864 07:53:27 -- json_config/json_config.sh@64 -- # IFS=: 00:07:21.864 07:53:27 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:21.864 07:53:27 -- json_config/json_config.sh@65 -- # echo bdev_register:17cca706-4fa8-497c-9d31-541cc5ff60bc 00:07:21.864 07:53:27 -- json_config/json_config.sh@64 -- # IFS=: 00:07:21.864 07:53:27 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:21.864 07:53:27 -- json_config/json_config.sh@65 -- # echo bdev_register:ffecd1dd-46d3-4a62-b84a-17dd32d4a177 00:07:21.864 07:53:27 -- json_config/json_config.sh@64 -- # IFS=: 00:07:21.864 07:53:27 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:21.864 07:53:27 -- json_config/json_config.sh@77 
-- # [[ bdev_register:17cca706-4fa8-497c-9d31-541cc5ff60bc bdev_register:668b0cd3-e4ef-4973-ac6c-2214c674b655 bdev_register:81ae976a-bc38-4c63-acda-8a2c1b361512 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:ffecd1dd-46d3-4a62-b84a-17dd32d4a177 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\1\7\c\c\a\7\0\6\-\4\f\a\8\-\4\9\7\c\-\9\d\3\1\-\5\4\1\c\c\5\f\f\6\0\b\c\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\6\6\8\b\0\c\d\3\-\e\4\e\f\-\4\9\7\3\-\a\c\6\c\-\2\2\1\4\c\6\7\4\b\6\5\5\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\8\1\a\e\9\7\6\a\-\b\c\3\8\-\4\c\6\3\-\a\c\d\a\-\8\a\2\c\1\b\3\6\1\5\1\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\f\f\e\c\d\1\d\d\-\4\6\d\3\-\4\a\6\2\-\b\8\4\a\-\1\7\d\d\3\2\d\4\a\1\7\7 ]] 00:07:21.864 07:53:27 -- json_config/json_config.sh@89 -- # cat 00:07:21.864 07:53:27 -- json_config/json_config.sh@89 -- # printf ' %s\n' bdev_register:17cca706-4fa8-497c-9d31-541cc5ff60bc bdev_register:668b0cd3-e4ef-4973-ac6c-2214c674b655 bdev_register:81ae976a-bc38-4c63-acda-8a2c1b361512 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:ffecd1dd-46d3-4a62-b84a-17dd32d4a177 00:07:21.864 Expected events matched: 00:07:21.864 bdev_register:17cca706-4fa8-497c-9d31-541cc5ff60bc 00:07:21.864 bdev_register:668b0cd3-e4ef-4973-ac6c-2214c674b655 00:07:21.864 bdev_register:81ae976a-bc38-4c63-acda-8a2c1b361512 00:07:21.864 bdev_register:Malloc0 00:07:21.864 bdev_register:Malloc0p0 00:07:21.864 bdev_register:Malloc0p1 00:07:21.864 bdev_register:Malloc0p2 00:07:21.864 bdev_register:Malloc1 00:07:21.864 bdev_register:Malloc3 00:07:21.864 bdev_register:Null0 00:07:21.864 bdev_register:Nvme0n1 00:07:21.864 bdev_register:Nvme0n1p0 00:07:21.864 bdev_register:Nvme0n1p1 00:07:21.864 bdev_register:PTBdevFromMalloc3 00:07:21.864 bdev_register:aio_disk 00:07:21.864 bdev_register:ffecd1dd-46d3-4a62-b84a-17dd32d4a177 00:07:21.864 07:53:27 -- json_config/json_config.sh@233 -- # timing_exit create_bdev_subsystem_config 00:07:21.864 07:53:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:21.864 07:53:27 -- common/autotest_common.sh@10 -- # set +x 00:07:21.864 07:53:27 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:07:21.864 07:53:27 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:07:21.864 07:53:27 -- json_config/json_config.sh@343 -- # [[ 0 -eq 1 ]] 00:07:21.864 07:53:27 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:07:21.864 07:53:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:21.864 07:53:27 -- common/autotest_common.sh@10 -- # set +x 00:07:21.864 
07:53:27 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:07:21.864 07:53:27 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:21.864 07:53:27 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:22.123 MallocBdevForConfigChangeCheck 00:07:22.123 07:53:27 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:07:22.123 07:53:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:22.123 07:53:27 -- common/autotest_common.sh@10 -- # set +x 00:07:22.123 07:53:27 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:07:22.123 07:53:27 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:22.382 INFO: shutting down applications... 00:07:22.382 07:53:27 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:07:22.382 07:53:27 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:07:22.382 07:53:27 -- json_config/json_config.sh@431 -- # json_config_clear target 00:07:22.382 07:53:27 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:07:22.382 07:53:27 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:22.382 [2024-07-13 07:53:28.146842] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:07:22.641 Calling clear_vhost_scsi_subsystem 00:07:22.641 Calling clear_iscsi_subsystem 00:07:22.641 Calling clear_vhost_blk_subsystem 00:07:22.641 Calling clear_nbd_subsystem 00:07:22.641 Calling clear_nvmf_subsystem 00:07:22.641 Calling clear_bdev_subsystem 00:07:22.641 Calling clear_accel_subsystem 00:07:22.641 Calling clear_iobuf_subsystem 00:07:22.641 Calling clear_sock_subsystem 00:07:22.641 Calling clear_vmd_subsystem 00:07:22.641 Calling clear_scheduler_subsystem 00:07:22.641 07:53:28 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:07:22.641 07:53:28 -- json_config/json_config.sh@396 -- # count=100 00:07:22.641 07:53:28 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:07:22.641 07:53:28 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:22.641 07:53:28 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:22.641 07:53:28 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:07:22.900 07:53:28 -- json_config/json_config.sh@398 -- # break 00:07:22.900 07:53:28 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:07:22.900 07:53:28 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:07:22.900 07:53:28 -- json_config/json_config.sh@120 -- # local app=target 00:07:22.900 07:53:28 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:07:22.900 07:53:28 -- json_config/json_config.sh@124 -- # [[ -n 52358 ]] 00:07:22.900 07:53:28 -- json_config/json_config.sh@127 -- # kill -SIGINT 52358 00:07:22.900 07:53:28 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:07:22.900 07:53:28 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:07:22.900 07:53:28 -- 
json_config/json_config.sh@130 -- # kill -0 52358 00:07:22.900 07:53:28 -- json_config/json_config.sh@134 -- # sleep 0.5 00:07:23.468 07:53:29 -- json_config/json_config.sh@129 -- # (( i++ )) 00:07:23.468 07:53:29 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:07:23.468 07:53:29 -- json_config/json_config.sh@130 -- # kill -0 52358 00:07:23.468 07:53:29 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:07:23.468 07:53:29 -- json_config/json_config.sh@132 -- # break 00:07:23.468 SPDK target shutdown done 00:07:23.468 07:53:29 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:07:23.468 07:53:29 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:07:23.468 INFO: relaunching applications... 00:07:23.468 Waiting for target to run... 00:07:23.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:23.468 07:53:29 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:07:23.468 07:53:29 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:23.468 07:53:29 -- json_config/json_config.sh@98 -- # local app=target 00:07:23.468 07:53:29 -- json_config/json_config.sh@99 -- # shift 00:07:23.468 07:53:29 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:07:23.468 07:53:29 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:07:23.468 07:53:29 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:07:23.468 07:53:29 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:07:23.468 07:53:29 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:07:23.468 07:53:29 -- json_config/json_config.sh@111 -- # app_pid[$app]=52590 00:07:23.468 07:53:29 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:07:23.468 07:53:29 -- json_config/json_config.sh@114 -- # waitforlisten 52590 /var/tmp/spdk_tgt.sock 00:07:23.468 07:53:29 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:23.468 07:53:29 -- common/autotest_common.sh@819 -- # '[' -z 52590 ']' 00:07:23.468 07:53:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:23.468 07:53:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:23.468 07:53:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:23.468 07:53:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:23.468 07:53:29 -- common/autotest_common.sh@10 -- # set +x 00:07:23.468 [2024-07-13 07:53:29.260881] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
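The relaunch exercised here restarts spdk_tgt from the configuration captured off the first instance. Reduced to a sketch (save_config and the --json flag are both visible in the surrounding trace):

  # Capture the live configuration, then boot a fresh target from it.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json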
00:07:23.468 [2024-07-13 07:53:29.261081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid52590 ] 00:07:24.035 [2024-07-13 07:53:29.640621] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.035 [2024-07-13 07:53:29.667541] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:24.035 [2024-07-13 07:53:29.667757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.035 [2024-07-13 07:53:29.794368] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:07:24.035 [2024-07-13 07:53:29.794697] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:07:24.035 [2024-07-13 07:53:29.802354] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:24.035 [2024-07-13 07:53:29.802395] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:24.035 [2024-07-13 07:53:29.810379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:24.035 [2024-07-13 07:53:29.810426] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:24.035 [2024-07-13 07:53:29.810627] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:24.294 [2024-07-13 07:53:29.890650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:24.294 [2024-07-13 07:53:29.890735] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:24.294 [2024-07-13 07:53:29.890778] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000037880 00:07:24.294 [2024-07-13 07:53:29.890805] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:24.294 [2024-07-13 07:53:29.891112] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:24.294 [2024-07-13 07:53:29.891143] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:07:24.294 00:07:24.294 INFO: Checking if target configuration is the same... 00:07:24.294 07:53:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:24.294 07:53:30 -- common/autotest_common.sh@852 -- # return 0 00:07:24.294 07:53:30 -- json_config/json_config.sh@115 -- # echo '' 00:07:24.294 07:53:30 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:07:24.294 07:53:30 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:24.294 07:53:30 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:24.294 07:53:30 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:07:24.294 07:53:30 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:24.294 + '[' 2 -ne 2 ']' 00:07:24.294 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:24.294 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:07:24.294 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:24.294 +++ basename /dev/fd/62 00:07:24.294 ++ mktemp /tmp/62.XXX 00:07:24.294 + tmp_file_1=/tmp/62.doY 00:07:24.294 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:24.294 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:24.294 + tmp_file_2=/tmp/spdk_tgt_config.json.VWi 00:07:24.294 + ret=0 00:07:24.294 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:24.553 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:24.553 + diff -u /tmp/62.doY /tmp/spdk_tgt_config.json.VWi 00:07:24.553 INFO: JSON config files are the same 00:07:24.553 + echo 'INFO: JSON config files are the same' 00:07:24.553 + rm /tmp/62.doY /tmp/spdk_tgt_config.json.VWi 00:07:24.553 + exit 0 00:07:24.553 INFO: changing configuration and checking if this can be detected... 00:07:24.553 07:53:30 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:07:24.553 07:53:30 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:24.553 07:53:30 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:24.553 07:53:30 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:24.812 07:53:30 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:24.812 07:53:30 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:07:24.812 07:53:30 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:24.812 + '[' 2 -ne 2 ']' 00:07:24.812 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:24.812 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:07:24.812 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:24.812 +++ basename /dev/fd/62 00:07:24.812 ++ mktemp /tmp/62.XXX 00:07:24.812 + tmp_file_1=/tmp/62.nV2 00:07:24.812 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:24.812 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:24.812 + tmp_file_2=/tmp/spdk_tgt_config.json.ZPo 00:07:24.812 + ret=0 00:07:24.812 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:25.379 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:25.379 + diff -u /tmp/62.nV2 /tmp/spdk_tgt_config.json.ZPo 00:07:25.379 + ret=1 00:07:25.379 + echo '=== Start of file: /tmp/62.nV2 ===' 00:07:25.379 + cat /tmp/62.nV2 00:07:25.379 + echo '=== End of file: /tmp/62.nV2 ===' 00:07:25.379 + echo '' 00:07:25.379 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ZPo ===' 00:07:25.379 + cat /tmp/spdk_tgt_config.json.ZPo 00:07:25.379 + echo '=== End of file: /tmp/spdk_tgt_config.json.ZPo ===' 00:07:25.379 + echo '' 00:07:25.379 + rm /tmp/62.nV2 /tmp/spdk_tgt_config.json.ZPo 00:07:25.379 + exit 1 00:07:25.379 INFO: configuration change detected. 00:07:25.379 07:53:31 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
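The comparison that json_diff.sh performs above amounts to normalizing both configurations with config_filter.py -method sort and diffing the results. A sketch of the same check, assuming config_filter.py reads JSON on stdin (the /tmp file names here are illustrative, not the mktemp names in the trace):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort > /tmp/live.json
  /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort \
      < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved.json
  diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'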
00:07:25.379 07:53:31 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:07:25.379 07:53:31 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:07:25.379 07:53:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:25.379 07:53:31 -- common/autotest_common.sh@10 -- # set +x 00:07:25.379 07:53:31 -- json_config/json_config.sh@360 -- # local ret=0 00:07:25.379 07:53:31 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:07:25.379 07:53:31 -- json_config/json_config.sh@370 -- # [[ -n 52590 ]] 00:07:25.379 07:53:31 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:07:25.379 07:53:31 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:07:25.379 07:53:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:25.379 07:53:31 -- common/autotest_common.sh@10 -- # set +x 00:07:25.379 07:53:31 -- json_config/json_config.sh@239 -- # [[ 1 -eq 1 ]] 00:07:25.379 07:53:31 -- json_config/json_config.sh@240 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:07:25.379 07:53:31 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:07:25.654 07:53:31 -- json_config/json_config.sh@241 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:07:25.654 07:53:31 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:07:25.654 07:53:31 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:07:25.654 07:53:31 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:07:25.922 07:53:31 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:07:25.922 07:53:31 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:07:26.181 07:53:31 -- json_config/json_config.sh@246 -- # uname -s 00:07:26.181 07:53:31 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:07:26.181 07:53:31 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:07:26.181 07:53:31 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:07:26.181 07:53:31 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:07:26.181 07:53:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:26.181 07:53:31 -- common/autotest_common.sh@10 -- # set +x 00:07:26.181 07:53:31 -- json_config/json_config.sh@376 -- # killprocess 52590 00:07:26.181 07:53:31 -- common/autotest_common.sh@926 -- # '[' -z 52590 ']' 00:07:26.181 07:53:31 -- common/autotest_common.sh@930 -- # kill -0 52590 00:07:26.181 07:53:31 -- common/autotest_common.sh@931 -- # uname 00:07:26.181 07:53:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:26.181 07:53:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 52590 00:07:26.181 killing process with pid 52590 00:07:26.181 07:53:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:26.181 07:53:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:26.181 07:53:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 52590' 00:07:26.181 07:53:31 -- common/autotest_common.sh@945 -- # kill 52590 00:07:26.181 07:53:31 -- common/autotest_common.sh@950 -- # wait 52590 00:07:26.440 07:53:32 -- json_config/json_config.sh@379 -- 
# rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:26.440 07:53:32 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:07:26.440 07:53:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:26.440 07:53:32 -- common/autotest_common.sh@10 -- # set +x 00:07:26.440 INFO: Success 00:07:26.440 07:53:32 -- json_config/json_config.sh@381 -- # return 0 00:07:26.440 07:53:32 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:07:26.440 ************************************ 00:07:26.440 END TEST json_config 00:07:26.440 ************************************ 00:07:26.440 00:07:26.440 real 0m9.543s 00:07:26.440 user 0m13.963s 00:07:26.440 sys 0m2.104s 00:07:26.440 07:53:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.440 07:53:32 -- common/autotest_common.sh@10 -- # set +x 00:07:26.440 07:53:32 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:26.440 07:53:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:26.440 07:53:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:26.440 07:53:32 -- common/autotest_common.sh@10 -- # set +x 00:07:26.440 ************************************ 00:07:26.440 START TEST json_config_extra_key 00:07:26.440 ************************************ 00:07:26.440 07:53:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:26.440 07:53:32 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:26.440 07:53:32 -- nvmf/common.sh@7 -- # uname -s 00:07:26.440 07:53:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.440 07:53:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.440 07:53:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.440 07:53:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.440 07:53:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.440 07:53:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.440 07:53:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.440 07:53:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.440 07:53:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.440 07:53:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.440 07:53:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a0d65948-2076-4c88-bacd-d5d371c8c3b5 00:07:26.440 07:53:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=a0d65948-2076-4c88-bacd-d5d371c8c3b5 00:07:26.440 07:53:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.440 07:53:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.440 07:53:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:26.440 07:53:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:26.440 07:53:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.440 07:53:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.440 07:53:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.440 07:53:32 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:07:26.440 07:53:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:07:26.440 07:53:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:07:26.440 07:53:32 -- paths/export.sh@5 -- # export PATH 00:07:26.440 07:53:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:07:26.440 07:53:32 -- nvmf/common.sh@46 -- # : 0 00:07:26.440 07:53:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:26.440 07:53:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:26.440 07:53:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:26.440 07:53:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.440 07:53:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.440 07:53:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:26.440 07:53:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:26.440 07:53:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:26.440 07:53:32 -- json_config/json_config_extra_key.sh@16 -- # app_pid=([target]="") 00:07:26.440 07:53:32 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:07:26.440 07:53:32 -- json_config/json_config_extra_key.sh@17 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock') 00:07:26.440 07:53:32 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:07:26.440 07:53:32 -- json_config/json_config_extra_key.sh@18 -- # app_params=([target]='-m 0x1 -s 1024') 00:07:26.440 07:53:32 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:07:26.441 07:53:32 -- json_config/json_config_extra_key.sh@19 -- # configs_path=([target]="$rootdir/test/json_config/extra_key.json") 00:07:26.441 07:53:32 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:07:26.441 INFO: launching applications... 00:07:26.441 07:53:32 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:26.441 07:53:32 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 
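The json_config_extra_key test launches the target directly from test/json_config/extra_key.json rather than configuring it over RPC (see the start_app invocation that follows). The exact contents of extra_key.json are not reproduced in this log; as a hedged sketch, an SPDK JSON config of this kind generally follows the save_config shape, with per-subsystem method/params entries. The bdev_malloc_create entry and its name/num_blocks/block_size parameters below are shown for illustration only:

  # Illustrative config file; the real extra_key.json may differ.
  cat > /tmp/extra_key_example.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_malloc_create",
            "params": { "name": "Malloc0", "num_blocks": 65536, "block_size": 512 }
          }
        ]
      }
    ]
  }
  EOF
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock --json /tmp/extra_key_example.json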
00:07:26.441 07:53:32 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:26.441 07:53:32 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:07:26.441 07:53:32 -- json_config/json_config_extra_key.sh@25 -- # shift 00:07:26.441 07:53:32 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:07:26.441 07:53:32 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:07:26.441 Waiting for target to run... 00:07:26.441 07:53:32 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=52764 00:07:26.441 07:53:32 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:07:26.441 07:53:32 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 52764 /var/tmp/spdk_tgt.sock 00:07:26.441 07:53:32 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:26.441 07:53:32 -- common/autotest_common.sh@819 -- # '[' -z 52764 ']' 00:07:26.441 07:53:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:26.441 07:53:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:26.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:26.441 07:53:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:26.441 07:53:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:26.441 07:53:32 -- common/autotest_common.sh@10 -- # set +x 00:07:26.699 [2024-07-13 07:53:32.333992] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:26.700 [2024-07-13 07:53:32.334208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid52764 ] 00:07:26.958 [2024-07-13 07:53:32.727626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.958 [2024-07-13 07:53:32.753875] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:26.958 [2024-07-13 07:53:32.754077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.532 00:07:27.532 INFO: shutting down applications... 00:07:27.532 07:53:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:27.532 07:53:33 -- common/autotest_common.sh@852 -- # return 0 00:07:27.532 07:53:33 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:07:27.532 07:53:33 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
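The shutdown sequence that follows sends SIGINT and then polls the pid with kill -0 in 0.5 s steps, giving the target up to 30 tries to exit cleanly. The loop reduces to:

  kill -SIGINT "$pid"
  for i in $(seq 1 30); do
      kill -0 "$pid" 2>/dev/null || break   # target has exited
      sleep 0.5
  done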
00:07:27.532 07:53:33 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:07:27.532 07:53:33 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:07:27.532 07:53:33 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:07:27.532 07:53:33 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 52764 ]] 00:07:27.532 07:53:33 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 52764 00:07:27.532 07:53:33 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:07:27.532 07:53:33 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:27.532 07:53:33 -- json_config/json_config_extra_key.sh@50 -- # kill -0 52764 00:07:27.532 07:53:33 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:28.099 07:53:33 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:28.099 07:53:33 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:28.099 07:53:33 -- json_config/json_config_extra_key.sh@50 -- # kill -0 52764 00:07:28.099 07:53:33 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:07:28.100 07:53:33 -- json_config/json_config_extra_key.sh@52 -- # break 00:07:28.100 07:53:33 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:07:28.100 SPDK target shutdown done 00:07:28.100 07:53:33 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:07:28.100 Success 00:07:28.100 07:53:33 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:07:28.100 ************************************ 00:07:28.100 END TEST json_config_extra_key 00:07:28.100 ************************************ 00:07:28.100 00:07:28.100 real 0m1.556s 00:07:28.100 user 0m1.233s 00:07:28.100 sys 0m0.419s 00:07:28.100 07:53:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.100 07:53:33 -- common/autotest_common.sh@10 -- # set +x 00:07:28.100 07:53:33 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:28.100 07:53:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:28.100 07:53:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:28.100 07:53:33 -- common/autotest_common.sh@10 -- # set +x 00:07:28.100 ************************************ 00:07:28.100 START TEST alias_rpc 00:07:28.100 ************************************ 00:07:28.100 07:53:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:28.100 * Looking for test storage... 00:07:28.100 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:28.100 07:53:33 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:28.100 07:53:33 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=52849 00:07:28.100 07:53:33 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 52849 00:07:28.100 07:53:33 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:28.100 07:53:33 -- common/autotest_common.sh@819 -- # '[' -z 52849 ']' 00:07:28.100 07:53:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.100 07:53:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:28.100 07:53:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:28.100 07:53:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:28.100 07:53:33 -- common/autotest_common.sh@10 -- # set +x 00:07:28.359 [2024-07-13 07:53:33.941012] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:28.359 [2024-07-13 07:53:33.941209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid52849 ] 00:07:28.359 [2024-07-13 07:53:34.076482] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.359 [2024-07-13 07:53:34.125382] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:28.359 [2024-07-13 07:53:34.125840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.927 07:53:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:28.927 07:53:34 -- common/autotest_common.sh@852 -- # return 0 00:07:28.927 07:53:34 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:29.185 07:53:34 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 52849 00:07:29.185 07:53:34 -- common/autotest_common.sh@926 -- # '[' -z 52849 ']' 00:07:29.185 07:53:34 -- common/autotest_common.sh@930 -- # kill -0 52849 00:07:29.185 07:53:34 -- common/autotest_common.sh@931 -- # uname 00:07:29.185 07:53:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:29.185 07:53:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 52849 00:07:29.185 07:53:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:29.185 killing process with pid 52849 00:07:29.185 07:53:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:29.185 07:53:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 52849' 00:07:29.185 07:53:34 -- common/autotest_common.sh@945 -- # kill 52849 00:07:29.185 07:53:34 -- common/autotest_common.sh@950 -- # wait 52849 00:07:29.752 00:07:29.752 real 0m1.553s 00:07:29.752 user 0m1.567s 00:07:29.752 sys 0m0.411s 00:07:29.752 07:53:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.752 ************************************ 00:07:29.752 END TEST alias_rpc 00:07:29.752 ************************************ 00:07:29.752 07:53:35 -- common/autotest_common.sh@10 -- # set +x 00:07:29.752 07:53:35 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:07:29.752 07:53:35 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:29.752 07:53:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:29.752 07:53:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:29.752 07:53:35 -- common/autotest_common.sh@10 -- # set +x 00:07:29.752 ************************************ 00:07:29.752 START TEST spdkcli_tcp 00:07:29.752 ************************************ 00:07:29.752 07:53:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:29.752 * Looking for test storage... 
00:07:29.752 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:29.752 07:53:35 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:29.752 07:53:35 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:29.752 07:53:35 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:29.753 07:53:35 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:29.753 07:53:35 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:29.753 07:53:35 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:29.753 07:53:35 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:29.753 07:53:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:29.753 07:53:35 -- common/autotest_common.sh@10 -- # set +x 00:07:29.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.753 07:53:35 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=52934 00:07:29.753 07:53:35 -- spdkcli/tcp.sh@27 -- # waitforlisten 52934 00:07:29.753 07:53:35 -- common/autotest_common.sh@819 -- # '[' -z 52934 ']' 00:07:29.753 07:53:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.753 07:53:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:29.753 07:53:35 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:29.753 07:53:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.753 07:53:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:29.753 07:53:35 -- common/autotest_common.sh@10 -- # set +x 00:07:29.753 [2024-07-13 07:53:35.555927] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:29.753 [2024-07-13 07:53:35.556122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid52934 ] 00:07:30.012 [2024-07-13 07:53:35.703556] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:30.012 [2024-07-13 07:53:35.753517] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:30.012 [2024-07-13 07:53:35.753952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.012 [2024-07-13 07:53:35.753888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.579 07:53:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:30.579 07:53:36 -- common/autotest_common.sh@852 -- # return 0 00:07:30.579 07:53:36 -- spdkcli/tcp.sh@31 -- # socat_pid=52956 00:07:30.579 07:53:36 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:30.579 07:53:36 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:30.838 [ 00:07:30.838 "spdk_get_version", 00:07:30.838 "rpc_get_methods", 00:07:30.838 "trace_get_info", 00:07:30.838 "trace_get_tpoint_group_mask", 00:07:30.838 "trace_disable_tpoint_group", 00:07:30.839 "trace_enable_tpoint_group", 00:07:30.839 "trace_clear_tpoint_mask", 00:07:30.839 "trace_set_tpoint_mask", 00:07:30.839 "framework_get_pci_devices", 00:07:30.839 "framework_get_config", 00:07:30.839 "framework_get_subsystems", 00:07:30.839 "iobuf_get_stats", 00:07:30.839 "iobuf_set_options", 00:07:30.839 "sock_set_default_impl", 00:07:30.839 "sock_impl_set_options", 00:07:30.839 "sock_impl_get_options", 00:07:30.839 "vmd_rescan", 00:07:30.839 "vmd_remove_device", 00:07:30.839 "vmd_enable", 00:07:30.839 "accel_get_stats", 00:07:30.839 "accel_set_options", 00:07:30.839 "accel_set_driver", 00:07:30.839 "accel_crypto_key_destroy", 00:07:30.839 "accel_crypto_keys_get", 00:07:30.839 "accel_crypto_key_create", 00:07:30.839 "accel_assign_opc", 00:07:30.839 "accel_get_module_info", 00:07:30.839 "accel_get_opc_assignments", 00:07:30.839 "notify_get_notifications", 00:07:30.839 "notify_get_types", 00:07:30.839 "bdev_get_histogram", 00:07:30.839 "bdev_enable_histogram", 00:07:30.839 "bdev_set_qos_limit", 00:07:30.839 "bdev_set_qd_sampling_period", 00:07:30.839 "bdev_get_bdevs", 00:07:30.839 "bdev_reset_iostat", 00:07:30.839 "bdev_get_iostat", 00:07:30.839 "bdev_examine", 00:07:30.839 "bdev_wait_for_examine", 00:07:30.839 "bdev_set_options", 00:07:30.839 "scsi_get_devices", 00:07:30.839 "thread_set_cpumask", 00:07:30.839 "framework_get_scheduler", 00:07:30.839 "framework_set_scheduler", 00:07:30.839 "framework_get_reactors", 00:07:30.839 "thread_get_io_channels", 00:07:30.839 "thread_get_pollers", 00:07:30.839 "thread_get_stats", 00:07:30.839 "framework_monitor_context_switch", 00:07:30.839 "spdk_kill_instance", 00:07:30.839 "log_enable_timestamps", 00:07:30.839 "log_get_flags", 00:07:30.839 "log_clear_flag", 00:07:30.839 "log_set_flag", 00:07:30.839 "log_get_level", 00:07:30.839 "log_set_level", 00:07:30.839 "log_get_print_level", 00:07:30.839 "log_set_print_level", 00:07:30.839 "framework_enable_cpumask_locks", 00:07:30.839 "framework_disable_cpumask_locks", 00:07:30.839 "framework_wait_init", 00:07:30.839 "framework_start_init", 00:07:30.839 "virtio_blk_create_transport", 00:07:30.839 "virtio_blk_get_transports", 
00:07:30.839 "vhost_controller_set_coalescing", 00:07:30.839 "vhost_get_controllers", 00:07:30.839 "vhost_delete_controller", 00:07:30.839 "vhost_create_blk_controller", 00:07:30.839 "vhost_scsi_controller_remove_target", 00:07:30.839 "vhost_scsi_controller_add_target", 00:07:30.839 "vhost_start_scsi_controller", 00:07:30.839 "vhost_create_scsi_controller", 00:07:30.839 "nbd_get_disks", 00:07:30.839 "nbd_stop_disk", 00:07:30.839 "nbd_start_disk", 00:07:30.839 "env_dpdk_get_mem_stats", 00:07:30.839 "nvmf_subsystem_get_listeners", 00:07:30.839 "nvmf_subsystem_get_qpairs", 00:07:30.839 "nvmf_subsystem_get_controllers", 00:07:30.839 "nvmf_get_stats", 00:07:30.839 "nvmf_get_transports", 00:07:30.839 "nvmf_create_transport", 00:07:30.839 "nvmf_get_targets", 00:07:30.839 "nvmf_delete_target", 00:07:30.839 "nvmf_create_target", 00:07:30.839 "nvmf_subsystem_allow_any_host", 00:07:30.839 "nvmf_subsystem_remove_host", 00:07:30.839 "nvmf_subsystem_add_host", 00:07:30.839 "nvmf_subsystem_remove_ns", 00:07:30.839 "nvmf_subsystem_add_ns", 00:07:30.839 "nvmf_subsystem_listener_set_ana_state", 00:07:30.839 "nvmf_discovery_get_referrals", 00:07:30.839 "nvmf_discovery_remove_referral", 00:07:30.839 "nvmf_discovery_add_referral", 00:07:30.839 "nvmf_subsystem_remove_listener", 00:07:30.839 "nvmf_subsystem_add_listener", 00:07:30.839 "nvmf_delete_subsystem", 00:07:30.839 "nvmf_create_subsystem", 00:07:30.839 "nvmf_get_subsystems", 00:07:30.839 "nvmf_set_crdt", 00:07:30.839 "nvmf_set_config", 00:07:30.839 "nvmf_set_max_subsystems", 00:07:30.839 "iscsi_set_options", 00:07:30.839 "iscsi_get_auth_groups", 00:07:30.839 "iscsi_auth_group_remove_secret", 00:07:30.839 "iscsi_auth_group_add_secret", 00:07:30.839 "iscsi_delete_auth_group", 00:07:30.839 "iscsi_create_auth_group", 00:07:30.839 "iscsi_set_discovery_auth", 00:07:30.839 "iscsi_get_options", 00:07:30.839 "iscsi_target_node_request_logout", 00:07:30.839 "iscsi_target_node_set_redirect", 00:07:30.839 "iscsi_target_node_set_auth", 00:07:30.839 "iscsi_target_node_add_lun", 00:07:30.839 "iscsi_get_connections", 00:07:30.839 "iscsi_portal_group_set_auth", 00:07:30.839 "iscsi_start_portal_group", 00:07:30.839 "iscsi_delete_portal_group", 00:07:30.839 "iscsi_create_portal_group", 00:07:30.839 "iscsi_get_portal_groups", 00:07:30.839 "iscsi_delete_target_node", 00:07:30.839 "iscsi_target_node_remove_pg_ig_maps", 00:07:30.839 "iscsi_target_node_add_pg_ig_maps", 00:07:30.839 "iscsi_create_target_node", 00:07:30.839 "iscsi_get_target_nodes", 00:07:30.839 "iscsi_delete_initiator_group", 00:07:30.839 "iscsi_initiator_group_remove_initiators", 00:07:30.839 "iscsi_initiator_group_add_initiators", 00:07:30.839 "iscsi_create_initiator_group", 00:07:30.839 "iscsi_get_initiator_groups", 00:07:30.839 "iaa_scan_accel_module", 00:07:30.839 "dsa_scan_accel_module", 00:07:30.839 "ioat_scan_accel_module", 00:07:30.839 "accel_error_inject_error", 00:07:30.839 "bdev_daos_resize", 00:07:30.839 "bdev_daos_delete", 00:07:30.839 "bdev_daos_create", 00:07:30.839 "bdev_virtio_attach_controller", 00:07:30.839 "bdev_virtio_scsi_get_devices", 00:07:30.839 "bdev_virtio_detach_controller", 00:07:30.839 "bdev_virtio_blk_set_hotplug", 00:07:30.839 "bdev_ftl_set_property", 00:07:30.839 "bdev_ftl_get_properties", 00:07:30.839 "bdev_ftl_get_stats", 00:07:30.839 "bdev_ftl_unmap", 00:07:30.839 "bdev_ftl_unload", 00:07:30.839 "bdev_ftl_delete", 00:07:30.839 "bdev_ftl_load", 00:07:30.839 "bdev_ftl_create", 00:07:30.839 "bdev_aio_delete", 00:07:30.839 "bdev_aio_rescan", 00:07:30.839 "bdev_aio_create", 
00:07:30.839 "blobfs_create", 00:07:30.839 "blobfs_detect", 00:07:30.839 "blobfs_set_cache_size", 00:07:30.839 "bdev_zone_block_delete", 00:07:30.839 "bdev_zone_block_create", 00:07:30.839 "bdev_delay_delete", 00:07:30.839 "bdev_delay_create", 00:07:30.839 "bdev_delay_update_latency", 00:07:30.839 "bdev_split_delete", 00:07:30.839 "bdev_split_create", 00:07:30.839 "bdev_error_inject_error", 00:07:30.839 "bdev_error_delete", 00:07:30.839 "bdev_error_create", 00:07:30.839 "bdev_raid_set_options", 00:07:30.839 "bdev_raid_remove_base_bdev", 00:07:30.839 "bdev_raid_add_base_bdev", 00:07:30.839 "bdev_raid_delete", 00:07:30.839 "bdev_raid_create", 00:07:30.839 "bdev_raid_get_bdevs", 00:07:30.839 "bdev_lvol_grow_lvstore", 00:07:30.839 "bdev_lvol_get_lvols", 00:07:30.839 "bdev_lvol_get_lvstores", 00:07:30.839 "bdev_lvol_delete", 00:07:30.839 "bdev_lvol_set_read_only", 00:07:30.839 "bdev_lvol_resize", 00:07:30.839 "bdev_lvol_decouple_parent", 00:07:30.839 "bdev_lvol_inflate", 00:07:30.839 "bdev_lvol_rename", 00:07:30.839 "bdev_lvol_clone_bdev", 00:07:30.839 "bdev_lvol_clone", 00:07:30.839 "bdev_lvol_snapshot", 00:07:30.839 "bdev_lvol_create", 00:07:30.839 "bdev_lvol_delete_lvstore", 00:07:30.839 "bdev_lvol_rename_lvstore", 00:07:30.839 "bdev_lvol_create_lvstore", 00:07:30.839 "bdev_passthru_delete", 00:07:30.839 "bdev_passthru_create", 00:07:30.839 "bdev_nvme_cuse_unregister", 00:07:30.839 "bdev_nvme_cuse_register", 00:07:30.839 "bdev_opal_new_user", 00:07:30.839 "bdev_opal_set_lock_state", 00:07:30.839 "bdev_opal_delete", 00:07:30.839 "bdev_opal_get_info", 00:07:30.839 "bdev_opal_create", 00:07:30.839 "bdev_nvme_opal_revert", 00:07:30.839 "bdev_nvme_opal_init", 00:07:30.839 "bdev_nvme_send_cmd", 00:07:30.839 "bdev_nvme_get_path_iostat", 00:07:30.839 "bdev_nvme_get_mdns_discovery_info", 00:07:30.839 "bdev_nvme_stop_mdns_discovery", 00:07:30.839 "bdev_nvme_start_mdns_discovery", 00:07:30.839 "bdev_nvme_set_multipath_policy", 00:07:30.839 "bdev_nvme_set_preferred_path", 00:07:30.839 "bdev_nvme_get_io_paths", 00:07:30.839 "bdev_nvme_remove_error_injection", 00:07:30.839 "bdev_nvme_add_error_injection", 00:07:30.839 "bdev_nvme_get_discovery_info", 00:07:30.839 "bdev_nvme_stop_discovery", 00:07:30.839 "bdev_nvme_start_discovery", 00:07:30.839 "bdev_nvme_get_controller_health_info", 00:07:30.839 "bdev_nvme_disable_controller", 00:07:30.839 "bdev_nvme_enable_controller", 00:07:30.839 "bdev_nvme_reset_controller", 00:07:30.839 "bdev_nvme_get_transport_statistics", 00:07:30.839 "bdev_nvme_apply_firmware", 00:07:30.839 "bdev_nvme_detach_controller", 00:07:30.839 "bdev_nvme_get_controllers", 00:07:30.839 "bdev_nvme_attach_controller", 00:07:30.839 "bdev_nvme_set_hotplug", 00:07:30.839 "bdev_nvme_set_options", 00:07:30.839 "bdev_null_resize", 00:07:30.839 "bdev_null_delete", 00:07:30.839 "bdev_null_create", 00:07:30.839 "bdev_malloc_delete", 00:07:30.839 "bdev_malloc_create" 00:07:30.839 ] 00:07:30.839 07:53:36 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:30.839 07:53:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:30.839 07:53:36 -- common/autotest_common.sh@10 -- # set +x 00:07:30.839 07:53:36 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:30.839 07:53:36 -- spdkcli/tcp.sh@38 -- # killprocess 52934 00:07:30.839 07:53:36 -- common/autotest_common.sh@926 -- # '[' -z 52934 ']' 00:07:30.839 07:53:36 -- common/autotest_common.sh@930 -- # kill -0 52934 00:07:30.839 07:53:36 -- common/autotest_common.sh@931 -- # uname 00:07:30.839 07:53:36 -- common/autotest_common.sh@931 -- 
# '[' Linux = Linux ']' 00:07:30.839 07:53:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 52934 00:07:30.839 killing process with pid 52934 00:07:30.839 07:53:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:30.839 07:53:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:30.839 07:53:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 52934' 00:07:30.839 07:53:36 -- common/autotest_common.sh@945 -- # kill 52934 00:07:30.839 07:53:36 -- common/autotest_common.sh@950 -- # wait 52934 00:07:31.405 ************************************ 00:07:31.405 END TEST spdkcli_tcp 00:07:31.405 ************************************ 00:07:31.405 00:07:31.405 real 0m1.620s 00:07:31.405 user 0m2.756s 00:07:31.405 sys 0m0.477s 00:07:31.405 07:53:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.405 07:53:36 -- common/autotest_common.sh@10 -- # set +x 00:07:31.405 07:53:36 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:31.405 07:53:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:31.405 07:53:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:31.405 07:53:36 -- common/autotest_common.sh@10 -- # set +x 00:07:31.405 ************************************ 00:07:31.405 START TEST dpdk_mem_utility 00:07:31.405 ************************************ 00:07:31.405 07:53:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:31.405 * Looking for test storage... 00:07:31.405 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:31.405 07:53:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:31.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.405 07:53:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=53038 00:07:31.405 07:53:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 53038 00:07:31.405 07:53:37 -- common/autotest_common.sh@819 -- # '[' -z 53038 ']' 00:07:31.405 07:53:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.405 07:53:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:31.405 07:53:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:31.405 07:53:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.405 07:53:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:31.405 07:53:37 -- common/autotest_common.sh@10 -- # set +x 00:07:31.405 [2024-07-13 07:53:37.214444] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
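The spdkcli_tcp block that finishes above verifies the JSON-RPC surface over TCP without the target ever opening a TCP port itself: socat bridges 127.0.0.1:9998 to the target's UNIX domain socket, and rpc.py talks to the bridge. Distilled from the trace, with the port, retry, and timeout values exactly as captured:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    # -r retries the connection, -t is the per-call timeout in seconds,
    # -s/-p aim the client at the TCP bridge instead of the default UNIX socket

The rpc_get_methods reply is the full method listing printed above, which doubles as a quick inventory of every RPC the target registered.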
00:07:31.405 [2024-07-13 07:53:37.214643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53038 ] 00:07:31.663 [2024-07-13 07:53:37.356014] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.663 [2024-07-13 07:53:37.407198] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:31.663 [2024-07-13 07:53:37.407407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.230 07:53:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:32.230 07:53:38 -- common/autotest_common.sh@852 -- # return 0 00:07:32.230 07:53:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:32.230 07:53:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:32.230 07:53:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:32.230 07:53:38 -- common/autotest_common.sh@10 -- # set +x 00:07:32.230 { 00:07:32.230 "filename": "/tmp/spdk_mem_dump.txt" 00:07:32.230 } 00:07:32.230 07:53:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:32.230 07:53:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:32.489 DPDK memory size 862.000000 MiB in 1 heap(s) 00:07:32.489 1 heaps totaling size 862.000000 MiB 00:07:32.489 size: 862.000000 MiB heap id: 0 00:07:32.489 end heaps---------- 00:07:32.489 8 mempools totaling size 646.224487 MiB 00:07:32.489 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:32.489 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:32.489 size: 132.629456 MiB name: bdev_io_53038 00:07:32.489 size: 51.011292 MiB name: evtpool_53038 00:07:32.489 size: 50.003479 MiB name: msgpool_53038 00:07:32.489 size: 21.763794 MiB name: PDU_Pool 00:07:32.489 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:32.489 size: 0.026123 MiB name: Session_Pool 00:07:32.489 end mempools------- 00:07:32.489 6 memzones totaling size 4.142822 MiB 00:07:32.489 size: 1.000366 MiB name: RG_ring_0_53038 00:07:32.489 size: 1.000366 MiB name: RG_ring_1_53038 00:07:32.489 size: 1.000366 MiB name: RG_ring_4_53038 00:07:32.489 size: 1.000366 MiB name: RG_ring_5_53038 00:07:32.489 size: 0.125366 MiB name: RG_ring_2_53038 00:07:32.489 size: 0.015991 MiB name: RG_ring_3_53038 00:07:32.489 end memzones------- 00:07:32.489 07:53:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:32.489 heap id: 0 total size: 862.000000 MiB number of busy elements: 260 number of free elements: 15 00:07:32.489 list of free elements. 
size: 12.370850 MiB 00:07:32.489 element at address: 0x200000400000 with size: 1.999512 MiB 00:07:32.489 element at address: 0x20001be00000 with size: 0.999878 MiB 00:07:32.490 element at address: 0x20001c000000 with size: 0.999878 MiB 00:07:32.490 element at address: 0x200003e00000 with size: 0.996277 MiB 00:07:32.490 element at address: 0x200034c00000 with size: 0.994446 MiB 00:07:32.490 element at address: 0x200007000000 with size: 0.959839 MiB 00:07:32.490 element at address: 0x20001c200000 with size: 0.936584 MiB 00:07:32.490 element at address: 0x200013800000 with size: 0.870300 MiB 00:07:32.490 element at address: 0x200000200000 with size: 0.836121 MiB 00:07:32.490 element at address: 0x20001da00000 with size: 0.568420 MiB 00:07:32.490 element at address: 0x20000b200000 with size: 0.489624 MiB 00:07:32.490 element at address: 0x200000800000 with size: 0.486694 MiB 00:07:32.490 element at address: 0x20001c400000 with size: 0.485657 MiB 00:07:32.490 element at address: 0x20002ae00000 with size: 0.401428 MiB 00:07:32.490 element at address: 0x200003a00000 with size: 0.346191 MiB 00:07:32.490 list of standard malloc elements. size: 199.258179 MiB 00:07:32.490 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:07:32.490 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:07:32.490 element at address: 0x20001befff80 with size: 1.000122 MiB 00:07:32.490 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:07:32.490 element at address: 0x20001c2fff80 with size: 1.000122 MiB 00:07:32.490 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:32.490 element at address: 0x20001c2eff00 with size: 0.062622 MiB 00:07:32.490 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:32.490 element at address: 0x20001c2efdc0 with size: 0.000305 MiB 00:07:32.490 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000002d7340 with size: 0.000183 MiB 
00:07:32.490 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20000087c980 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a58a00 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a58ac0 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a58b80 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a58c40 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a58d00 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a58dc0 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a58e80 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a58f40 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a59000 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a59180 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a59240 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a59300 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a59480 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a59540 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a59600 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a59780 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a59840 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a59900 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:07:32.490 element at 
address: 0x200003a59e40 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003adb300 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003adb500 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003affa80 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003affb40 with size: 0.000183 MiB 00:07:32.490 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:07:32.490 element at address: 0x2000138decc0 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20001c2efc40 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20001c2efd00 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20001c4bc740 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20001da91840 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20001da91900 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20001da919c0 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20001da91a80 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20001da91b40 
with size: 0.000183 MiB 00:07:32.490 element at address: 0x20001da91c00 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20001da91cc0 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20001da91d80 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20001da91e40 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20001da91f00 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20001da91fc0 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20001da92080 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20001da92140 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20001da92200 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20001da922c0 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20001da92380 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20001da92440 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20001da92500 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20001da925c0 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20001da92680 with size: 0.000183 MiB 00:07:32.490 element at address: 0x20001da92740 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da92800 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da928c0 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da92980 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da92a40 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da92b00 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da92bc0 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da92c80 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da92d40 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da92e00 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da92ec0 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da92f80 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da93040 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da93100 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da931c0 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da93280 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da93340 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da93400 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da934c0 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da93580 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da93640 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da93700 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da937c0 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da93880 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da93940 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da93a00 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da93ac0 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da93b80 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da93c40 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da93d00 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da93dc0 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da93e80 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da93f40 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da94000 with size: 0.000183 MiB 
00:07:32.491 element at address: 0x20001da940c0 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da94180 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da94240 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da94300 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da943c0 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da94480 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da94540 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da94600 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da946c0 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da94780 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da94840 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da94900 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da949c0 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da94a80 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da94b40 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da94c00 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da94cc0 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da94d80 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da94e40 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da94f00 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da94fc0 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da95080 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da95140 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da95200 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da952c0 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da95380 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20001da95440 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae66c40 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae66d00 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6d900 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6db00 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6dbc0 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6dc80 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6dd40 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6de00 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6dec0 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6df80 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6e040 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6e100 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6e1c0 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6e280 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6e340 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6e400 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6e4c0 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6e580 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6e640 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6e700 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6e7c0 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6e880 with size: 0.000183 MiB 00:07:32.491 element at 
address: 0x20002ae6e940 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6ea00 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6eac0 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6eb80 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6ec40 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6ed00 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6edc0 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6ee80 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6ef40 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6f000 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6f0c0 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6f180 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6f240 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6f300 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6f3c0 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6f480 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6f540 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6f600 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6f6c0 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6f780 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6f840 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6f900 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6f9c0 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6fa80 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6fb40 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6fc00 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6fcc0 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6fd80 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6fe40 with size: 0.000183 MiB 00:07:32.491 element at address: 0x20002ae6ff00 with size: 0.000183 MiB 00:07:32.491 list of memzone associated elements. 
size: 650.370972 MiB 00:07:32.491 element at address: 0x20001da95500 with size: 211.416748 MiB 00:07:32.491 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:32.491 element at address: 0x20002ae6ffc0 with size: 157.562561 MiB 00:07:32.491 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:32.491 element at address: 0x2000139def80 with size: 132.129028 MiB 00:07:32.491 associated memzone info: size: 132.128906 MiB name: MP_bdev_io_53038_0 00:07:32.491 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:07:32.491 associated memzone info: size: 48.002930 MiB name: MP_evtpool_53038_0 00:07:32.491 element at address: 0x200003fff380 with size: 48.003052 MiB 00:07:32.491 associated memzone info: size: 48.002930 MiB name: MP_msgpool_53038_0 00:07:32.491 element at address: 0x20001c5be940 with size: 20.255554 MiB 00:07:32.491 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:32.491 element at address: 0x200034dfeb40 with size: 18.005066 MiB 00:07:32.491 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:32.491 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:07:32.491 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_53038 00:07:32.491 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:07:32.491 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_53038 00:07:32.491 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:32.491 associated memzone info: size: 1.007996 MiB name: MP_evtpool_53038 00:07:32.491 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:07:32.491 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:32.491 element at address: 0x20001c4bc800 with size: 1.008118 MiB 00:07:32.491 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:32.491 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:07:32.491 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:32.491 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:07:32.491 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:32.491 element at address: 0x200003eff180 with size: 1.000488 MiB 00:07:32.491 associated memzone info: size: 1.000366 MiB name: RG_ring_0_53038 00:07:32.491 element at address: 0x200003affc00 with size: 1.000488 MiB 00:07:32.491 associated memzone info: size: 1.000366 MiB name: RG_ring_1_53038 00:07:32.491 element at address: 0x2000138ded80 with size: 1.000488 MiB 00:07:32.491 associated memzone info: size: 1.000366 MiB name: RG_ring_4_53038 00:07:32.491 element at address: 0x200034cfe940 with size: 1.000488 MiB 00:07:32.491 associated memzone info: size: 1.000366 MiB name: RG_ring_5_53038 00:07:32.491 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:07:32.491 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_53038 00:07:32.491 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:07:32.491 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:32.491 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:07:32.491 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:32.491 element at address: 0x20001c47c540 with size: 0.250488 MiB 00:07:32.491 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:32.491 element at address: 0x200003adf880 with size: 0.125488 MiB 00:07:32.492 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_53038 00:07:32.492 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:07:32.492 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:32.492 element at address: 0x20002ae66dc0 with size: 0.023743 MiB 00:07:32.492 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:32.492 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:07:32.492 associated memzone info: size: 0.015991 MiB name: RG_ring_3_53038 00:07:32.492 element at address: 0x20002ae6cf00 with size: 0.002441 MiB 00:07:32.492 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:32.492 element at address: 0x2000002d6e40 with size: 0.000305 MiB 00:07:32.492 associated memzone info: size: 0.000183 MiB name: MP_msgpool_53038 00:07:32.492 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:07:32.492 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_53038 00:07:32.492 element at address: 0x20002ae6d9c0 with size: 0.000305 MiB 00:07:32.492 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:32.492 07:53:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:32.492 07:53:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 53038 00:07:32.492 07:53:38 -- common/autotest_common.sh@926 -- # '[' -z 53038 ']' 00:07:32.492 07:53:38 -- common/autotest_common.sh@930 -- # kill -0 53038 00:07:32.492 07:53:38 -- common/autotest_common.sh@931 -- # uname 00:07:32.492 07:53:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:32.492 07:53:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 53038 00:07:32.492 killing process with pid 53038 00:07:32.492 07:53:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:32.492 07:53:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:32.492 07:53:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 53038' 00:07:32.492 07:53:38 -- common/autotest_common.sh@945 -- # kill 53038 00:07:32.492 07:53:38 -- common/autotest_common.sh@950 -- # wait 53038 00:07:32.750 ************************************ 00:07:32.750 END TEST dpdk_mem_utility 00:07:32.750 ************************************ 00:07:32.750 00:07:32.750 real 0m1.537s 00:07:32.750 user 0m1.536s 00:07:32.750 sys 0m0.414s 00:07:32.750 07:53:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.750 07:53:38 -- common/autotest_common.sh@10 -- # set +x 00:07:33.007 07:53:38 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:33.007 07:53:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:33.007 07:53:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:33.007 07:53:38 -- common/autotest_common.sh@10 -- # set +x 00:07:33.008 ************************************ 00:07:33.008 START TEST event 00:07:33.008 ************************************ 00:07:33.008 07:53:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:33.008 * Looking for test storage... 
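The dpdk_mem_utility pass that ends above is a three-step pipeline: ask the running target to dump its allocator state, then post-process the dump twice. Reconstructed from the trace (the RPC reports writing /tmp/spdk_mem_dump.txt, which dpdk_mem_info.py then parses):

    ./scripts/rpc.py env_dpdk_get_mem_stats   # target dumps state to /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                # heap / mempool / memzone summary
    ./scripts/dpdk_mem_info.py -m 0           # per-element detail for heap id 0, as listed above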
00:07:33.008 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:33.008 07:53:38 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:33.008 07:53:38 -- bdev/nbd_common.sh@6 -- # set -e 00:07:33.008 07:53:38 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:33.008 07:53:38 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:33.008 07:53:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:33.008 07:53:38 -- common/autotest_common.sh@10 -- # set +x 00:07:33.008 ************************************ 00:07:33.008 START TEST event_perf 00:07:33.008 ************************************ 00:07:33.008 07:53:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:33.008 Running I/O for 1 seconds...[2024-07-13 07:53:38.701104] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:33.008 [2024-07-13 07:53:38.701357] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53128 ] 00:07:33.265 [2024-07-13 07:53:38.857541] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:33.265 [2024-07-13 07:53:38.911732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:33.265 [2024-07-13 07:53:38.911710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.265 [2024-07-13 07:53:38.911733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.265 [2024-07-13 07:53:38.911673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.199 Running I/O for 1 seconds... 00:07:34.199 lcore 0: 270918 00:07:34.199 lcore 1: 270918 00:07:34.199 lcore 2: 270917 00:07:34.199 lcore 3: 270917 00:07:34.199 done. 00:07:34.199 ************************************ 00:07:34.199 END TEST event_perf 00:07:34.199 ************************************ 00:07:34.199 00:07:34.199 real 0m1.318s 00:07:34.199 user 0m4.131s 00:07:34.199 sys 0m0.092s 00:07:34.199 07:53:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.199 07:53:39 -- common/autotest_common.sh@10 -- # set +x 00:07:34.457 07:53:40 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:34.457 07:53:40 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:34.457 07:53:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:34.457 07:53:40 -- common/autotest_common.sh@10 -- # set +x 00:07:34.457 ************************************ 00:07:34.457 START TEST event_reactor 00:07:34.457 ************************************ 00:07:34.457 07:53:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:34.457 [2024-07-13 07:53:40.066788] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
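In the event_perf run above, -m 0xF (binary 1111) hands the app lcores 0 through 3, so four reactors spin for the window set by -t 1 and each prints the events it processed; the per-lcore totals land within one event of each other because every reactor runs the same workload. The invocation, restated relative to the repo root:

    # 0xF selects lcores 0-3; -t 1 bounds the measurement to one second
    ./test/event/event_perf/event_perf -m 0xF -t 1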
00:07:34.457 [2024-07-13 07:53:40.067012] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53176 ] 00:07:34.457 [2024-07-13 07:53:40.211491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.457 [2024-07-13 07:53:40.262350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.829 test_start 00:07:35.829 oneshot 00:07:35.829 tick 100 00:07:35.829 tick 100 00:07:35.829 tick 250 00:07:35.829 tick 100 00:07:35.829 tick 100 00:07:35.829 tick 100 00:07:35.829 tick 250 00:07:35.829 tick 500 00:07:35.829 tick 100 00:07:35.829 tick 100 00:07:35.829 tick 250 00:07:35.829 tick 100 00:07:35.829 tick 100 00:07:35.829 test_end 00:07:35.829 ************************************ 00:07:35.829 END TEST event_reactor 00:07:35.829 ************************************ 00:07:35.829 00:07:35.829 real 0m1.296s 00:07:35.829 user 0m1.111s 00:07:35.829 sys 0m0.084s 00:07:35.829 07:53:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.829 07:53:41 -- common/autotest_common.sh@10 -- # set +x 00:07:35.829 07:53:41 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:35.829 07:53:41 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:35.829 07:53:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:35.829 07:53:41 -- common/autotest_common.sh@10 -- # set +x 00:07:35.829 ************************************ 00:07:35.829 START TEST event_reactor_perf 00:07:35.829 ************************************ 00:07:35.829 07:53:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:35.829 [2024-07-13 07:53:41.482647] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:35.829 [2024-07-13 07:53:41.482917] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53219 ] 00:07:35.829 [2024-07-13 07:53:41.636806] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.088 [2024-07-13 07:53:41.687873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.055 test_start 00:07:37.055 test_end 00:07:37.055 Performance: 687761 events per second 00:07:37.055 00:07:37.055 real 0m1.315s 00:07:37.055 user 0m1.119s 00:07:37.055 sys 0m0.095s 00:07:37.055 07:53:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.055 ************************************ 00:07:37.055 END TEST event_reactor_perf 00:07:37.055 ************************************ 00:07:37.055 07:53:42 -- common/autotest_common.sh@10 -- # set +x 00:07:37.055 07:53:42 -- event/event.sh@49 -- # uname -s 00:07:37.055 07:53:42 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:37.055 07:53:42 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:37.055 07:53:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:37.055 07:53:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.055 07:53:42 -- common/autotest_common.sh@10 -- # set +x 00:07:37.055 ************************************ 00:07:37.055 START TEST event_scheduler 00:07:37.055 ************************************ 00:07:37.055 07:53:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:37.313 * Looking for test storage... 00:07:37.313 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:37.313 07:53:42 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:37.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.313 07:53:42 -- scheduler/scheduler.sh@35 -- # scheduler_pid=53296 00:07:37.313 07:53:42 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:37.313 07:53:42 -- scheduler/scheduler.sh@37 -- # waitforlisten 53296 00:07:37.313 07:53:42 -- common/autotest_common.sh@819 -- # '[' -z 53296 ']' 00:07:37.313 07:53:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.313 07:53:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:37.313 07:53:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.313 07:53:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:37.313 07:53:42 -- common/autotest_common.sh@10 -- # set +x 00:07:37.313 07:53:42 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:37.313 [2024-07-13 07:53:43.046514] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
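The scheduler app launching here uses the --wait-for-rpc pattern: the framework pauses before subsystem initialization so the test can install a scheduler first, then releases it over RPC. The sequence below matches the framework_set_scheduler and framework_start_init calls traced in the lines that follow (both methods also appear in the rpc_get_methods listing earlier in this log):

    ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    ./scripts/rpc.py framework_set_scheduler dynamic   # pick the dynamic scheduler
    ./scripts/rpc.py framework_start_init              # resume the deferred init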
00:07:37.313 [2024-07-13 07:53:43.046742] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53296 ] 00:07:37.572 [2024-07-13 07:53:43.185467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:37.572 [2024-07-13 07:53:43.237950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.572 [2024-07-13 07:53:43.238064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.572 [2024-07-13 07:53:43.238239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:37.572 [2024-07-13 07:53:43.238406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.138 07:53:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:38.138 07:53:43 -- common/autotest_common.sh@852 -- # return 0 00:07:38.138 07:53:43 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:38.138 07:53:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:38.138 07:53:43 -- common/autotest_common.sh@10 -- # set +x 00:07:38.138 POWER: Env isn't set yet! 00:07:38.138 POWER: Attempting to initialise ACPI cpufreq power management... 00:07:38.138 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:38.138 POWER: Cannot set governor of lcore 0 to userspace 00:07:38.138 POWER: Attempting to initialise PSTAT power management... 00:07:38.138 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:38.138 POWER: Cannot set governor of lcore 0 to performance 00:07:38.138 POWER: Attempting to initialise CPPC power management... 00:07:38.138 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:38.138 POWER: Cannot set governor of lcore 0 to userspace 00:07:38.138 POWER: Attempting to initialise VM power management... 00:07:38.138 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:38.138 POWER: Unable to set Power Management Environment for lcore 0 00:07:38.138 [2024-07-13 07:53:43.923836] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:07:38.138 [2024-07-13 07:53:43.923864] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:07:38.138 [2024-07-13 07:53:43.923917] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:07:38.138 [2024-07-13 07:53:43.923953] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:38.138 [2024-07-13 07:53:43.923984] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:38.138 [2024-07-13 07:53:43.924019] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:38.138 07:53:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:38.138 07:53:43 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:38.138 07:53:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:38.138 07:53:43 -- common/autotest_common.sh@10 -- # set +x 00:07:38.397 [2024-07-13 07:53:44.018102] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
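What follows next is the dynamic scheduler probing the available power-management backends (ACPI cpufreq, PSTAT, CPPC, then the VM guest channel) and failing on each, which is expected inside a VM that exposes no cpufreq sysfs nodes; the framework simply proceeds without a governor. A quick host-side check for the node the probe tries to open, sketched under that assumption:

    # the governor probe opens this path per CPU; it is absent in this VM,
    # hence the "Cannot set governor" notices below
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor 2>/dev/null ||
        echo 'no cpufreq governor exposed (typical for a VM guest)'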
00:07:38.397 07:53:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:38.397 07:53:44 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:38.397 07:53:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:38.397 07:53:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:38.397 07:53:44 -- common/autotest_common.sh@10 -- # set +x 00:07:38.397 ************************************ 00:07:38.397 START TEST scheduler_create_thread 00:07:38.397 ************************************ 00:07:38.397 07:53:44 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:07:38.397 07:53:44 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:38.397 07:53:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:38.397 07:53:44 -- common/autotest_common.sh@10 -- # set +x 00:07:38.397 2 00:07:38.397 07:53:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:38.397 07:53:44 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:38.397 07:53:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:38.397 07:53:44 -- common/autotest_common.sh@10 -- # set +x 00:07:38.397 3 00:07:38.397 07:53:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:38.397 07:53:44 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:38.397 07:53:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:38.397 07:53:44 -- common/autotest_common.sh@10 -- # set +x 00:07:38.397 4 00:07:38.397 07:53:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:38.397 07:53:44 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:38.397 07:53:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:38.397 07:53:44 -- common/autotest_common.sh@10 -- # set +x 00:07:38.397 5 00:07:38.397 07:53:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:38.397 07:53:44 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:38.397 07:53:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:38.397 07:53:44 -- common/autotest_common.sh@10 -- # set +x 00:07:38.397 6 00:07:38.397 07:53:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:38.397 07:53:44 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:38.397 07:53:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:38.397 07:53:44 -- common/autotest_common.sh@10 -- # set +x 00:07:38.397 7 00:07:38.397 07:53:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:38.397 07:53:44 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:38.397 07:53:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:38.397 07:53:44 -- common/autotest_common.sh@10 -- # set +x 00:07:38.397 8 00:07:38.397 07:53:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:38.397 07:53:44 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:38.397 07:53:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:38.397 07:53:44 -- common/autotest_common.sh@10 -- # set +x 00:07:38.397 9 00:07:38.397 
07:53:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:38.397 07:53:44 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:38.397 07:53:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:38.397 07:53:44 -- common/autotest_common.sh@10 -- # set +x 00:07:38.397 10 00:07:38.397 07:53:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:38.397 07:53:44 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:38.397 07:53:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:38.397 07:53:44 -- common/autotest_common.sh@10 -- # set +x 00:07:38.397 07:53:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:38.397 07:53:44 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:38.397 07:53:44 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:38.397 07:53:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:38.397 07:53:44 -- common/autotest_common.sh@10 -- # set +x 00:07:38.397 07:53:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:38.397 07:53:44 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:38.397 07:53:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:38.397 07:53:44 -- common/autotest_common.sh@10 -- # set +x 00:07:38.397 07:53:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:38.397 07:53:44 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:38.397 07:53:44 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:38.397 07:53:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:38.397 07:53:44 -- common/autotest_common.sh@10 -- # set +x 00:07:39.776 07:53:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:39.776 ************************************ 00:07:39.776 END TEST scheduler_create_thread 00:07:39.776 ************************************ 00:07:39.776 00:07:39.776 real 0m1.169s 00:07:39.776 user 0m0.008s 00:07:39.776 sys 0m0.004s 00:07:39.776 07:53:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.776 07:53:45 -- common/autotest_common.sh@10 -- # set +x 00:07:39.776 07:53:45 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:39.776 07:53:45 -- scheduler/scheduler.sh@46 -- # killprocess 53296 00:07:39.776 07:53:45 -- common/autotest_common.sh@926 -- # '[' -z 53296 ']' 00:07:39.776 07:53:45 -- common/autotest_common.sh@930 -- # kill -0 53296 00:07:39.776 07:53:45 -- common/autotest_common.sh@931 -- # uname 00:07:39.776 07:53:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:39.776 07:53:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 53296 00:07:39.776 killing process with pid 53296 00:07:39.776 07:53:45 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:07:39.776 07:53:45 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:07:39.776 07:53:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 53296' 00:07:39.776 07:53:45 -- common/autotest_common.sh@945 -- # kill 53296 00:07:39.776 07:53:45 -- common/autotest_common.sh@950 -- # wait 53296 00:07:40.035 [2024-07-13 07:53:45.680019] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
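The xtrace above is scheduler.sh driving the custom scheduler RPC plugin: four fully active threads pinned to cores 0-3, four idle threads on the same cores, then unpinned threads that are retargeted and deleted. A condensed sketch of that sequence, reconstructed from the trace rather than taken verbatim from scheduler.sh (rpc_cmd is the suite's RPC wrapper):

    # Reconstructed from the xtrace; thread names and flags as logged.
    for mask in 0x1 0x2 0x4 0x8; do
        rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
    done
    for mask in 0x1 0x2 0x4 0x8; do
        rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
    done
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"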
00:07:40.294 00:07:40.294 real 0m3.064s 00:07:40.294 user 0m5.292s 00:07:40.294 sys 0m0.365s 00:07:40.294 07:53:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.294 ************************************ 00:07:40.294 END TEST event_scheduler 00:07:40.294 ************************************ 00:07:40.294 07:53:45 -- common/autotest_common.sh@10 -- # set +x 00:07:40.294 07:53:45 -- event/event.sh@51 -- # modprobe -n nbd 00:07:40.294 modprobe: FATAL: Module nbd not found. 00:07:40.294 07:53:45 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:40.294 07:53:45 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:40.294 07:53:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:40.294 07:53:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:40.294 07:53:45 -- common/autotest_common.sh@10 -- # set +x 00:07:40.294 ************************************ 00:07:40.294 START TEST cpu_locks 00:07:40.294 ************************************ 00:07:40.294 07:53:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:40.294 * Looking for test storage... 00:07:40.294 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:40.294 07:53:46 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:40.294 07:53:46 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:40.294 07:53:46 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:40.294 07:53:46 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:40.294 07:53:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:40.294 07:53:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:40.294 07:53:46 -- common/autotest_common.sh@10 -- # set +x 00:07:40.294 ************************************ 00:07:40.294 START TEST default_locks 00:07:40.294 ************************************ 00:07:40.294 07:53:46 -- common/autotest_common.sh@1104 -- # default_locks 00:07:40.294 07:53:46 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=53439 00:07:40.294 07:53:46 -- event/cpu_locks.sh@47 -- # waitforlisten 53439 00:07:40.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.294 07:53:46 -- common/autotest_common.sh@819 -- # '[' -z 53439 ']' 00:07:40.294 07:53:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.294 07:53:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:40.294 07:53:46 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:40.294 07:53:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.294 07:53:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:40.294 07:53:46 -- common/autotest_common.sh@10 -- # set +x 00:07:40.554 [2024-07-13 07:53:46.184391] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
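The modprobe failure logged just before the cpu_locks suite is why no nbd-backed event tests run in this job: event.sh dry-runs the module probe and falls through when it fails. A minimal sketch of that guard, assuming the structure the trace suggests (the real event.sh may differ):

    # Hypothetical guard inferred from the "modprobe -n nbd" line above.
    if modprobe -n nbd; then
        : # nbd-dependent app tests would run here; skipped on this kernel
    fi
    run_test cpu_locks "$testdir/cpu_locks.sh"   # runs regardless of nbd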
00:07:40.554 [2024-07-13 07:53:46.184682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53439 ] 00:07:40.554 [2024-07-13 07:53:46.332565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.813 [2024-07-13 07:53:46.388760] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:40.813 [2024-07-13 07:53:46.389043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.381 07:53:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:41.381 07:53:46 -- common/autotest_common.sh@852 -- # return 0 00:07:41.381 07:53:46 -- event/cpu_locks.sh@49 -- # locks_exist 53439 00:07:41.381 07:53:46 -- event/cpu_locks.sh@22 -- # lslocks -p 53439 00:07:41.381 07:53:46 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:42.318 07:53:47 -- event/cpu_locks.sh@50 -- # killprocess 53439 00:07:42.318 07:53:47 -- common/autotest_common.sh@926 -- # '[' -z 53439 ']' 00:07:42.318 07:53:47 -- common/autotest_common.sh@930 -- # kill -0 53439 00:07:42.318 07:53:47 -- common/autotest_common.sh@931 -- # uname 00:07:42.318 07:53:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:42.318 07:53:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 53439 00:07:42.318 07:53:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:42.318 07:53:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:42.318 killing process with pid 53439 00:07:42.318 07:53:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 53439' 00:07:42.318 07:53:47 -- common/autotest_common.sh@945 -- # kill 53439 00:07:42.318 07:53:47 -- common/autotest_common.sh@950 -- # wait 53439 00:07:42.577 07:53:48 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 53439 00:07:42.577 07:53:48 -- common/autotest_common.sh@640 -- # local es=0 00:07:42.577 07:53:48 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 53439 00:07:42.577 07:53:48 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:07:42.577 07:53:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:42.577 07:53:48 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:07:42.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.577 07:53:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:42.577 07:53:48 -- common/autotest_common.sh@643 -- # waitforlisten 53439 00:07:42.577 07:53:48 -- common/autotest_common.sh@819 -- # '[' -z 53439 ']' 00:07:42.577 07:53:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.577 07:53:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:42.577 07:53:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
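The two cpu_locks.sh@22 commands above are the suite's lock probe: lslocks lists the file locks held by the target pid and grep looks for the per-core lock file. A plausible shape for the helper, inferred from the trace:

    # Inferred from the xtrace; the real helper is cpu_locks.sh line 22.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock   # one /var/tmp/spdk_cpu_lock_* per claimed core
    }
    locks_exist 53439   # exits 0 while spdk_tgt holds its core-0 lock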
00:07:42.577 07:53:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:42.577 07:53:48 -- common/autotest_common.sh@10 -- # set +x 00:07:42.578 ERROR: process (pid: 53439) is no longer running 00:07:42.578 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (53439) - No such process 00:07:42.578 07:53:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:42.578 07:53:48 -- common/autotest_common.sh@852 -- # return 1 00:07:42.578 07:53:48 -- common/autotest_common.sh@643 -- # es=1 00:07:42.578 07:53:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:42.578 07:53:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:42.578 07:53:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:42.578 07:53:48 -- event/cpu_locks.sh@54 -- # no_locks 00:07:42.578 07:53:48 -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:07:42.578 07:53:48 -- event/cpu_locks.sh@26 -- # local lock_files 00:07:42.578 ************************************ 00:07:42.578 END TEST default_locks 00:07:42.578 ************************************ 00:07:42.578 07:53:48 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:42.578 00:07:42.578 real 0m2.109s 00:07:42.578 user 0m2.176s 00:07:42.578 sys 0m1.012s 00:07:42.578 07:53:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.578 07:53:48 -- common/autotest_common.sh@10 -- # set +x 00:07:42.578 07:53:48 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:42.578 07:53:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:42.578 07:53:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:42.578 07:53:48 -- common/autotest_common.sh@10 -- # set +x 00:07:42.578 ************************************ 00:07:42.578 START TEST default_locks_via_rpc 00:07:42.578 ************************************ 00:07:42.578 07:53:48 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:07:42.578 07:53:48 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=53500 00:07:42.578 07:53:48 -- event/cpu_locks.sh@63 -- # waitforlisten 53500 00:07:42.578 07:53:48 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:42.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.578 07:53:48 -- common/autotest_common.sh@819 -- # '[' -z 53500 ']' 00:07:42.578 07:53:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.578 07:53:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:42.578 07:53:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.578 07:53:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:42.578 07:53:48 -- common/autotest_common.sh@10 -- # set +x 00:07:42.578 [2024-07-13 07:53:48.345686] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
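The default_locks_via_rpc case launched above exercises the runtime toggle rather than a command-line flag: the target boots with its core lock held, drops it over RPC, and re-claims it. Condensed from the trace that follows (both RPC method names appear verbatim in the log):

    # Sketch of the via-rpc flow; assertions simplified.
    rpc_cmd framework_disable_cpumask_locks   # target removes its /var/tmp/spdk_cpu_lock* files
    no_locks                                  # helper: asserts no lock files remain
    rpc_cmd framework_enable_cpumask_locks    # target re-claims its cores
    locks_exist "$spdk_tgt_pid"               # the lock file is back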
00:07:42.578 [2024-07-13 07:53:48.345887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53500 ] 00:07:42.837 [2024-07-13 07:53:48.492600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.837 [2024-07-13 07:53:48.542615] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:42.837 [2024-07-13 07:53:48.542832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.405 07:53:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:43.405 07:53:49 -- common/autotest_common.sh@852 -- # return 0 00:07:43.405 07:53:49 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:43.405 07:53:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:43.405 07:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:43.405 07:53:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:43.405 07:53:49 -- event/cpu_locks.sh@67 -- # no_locks 00:07:43.405 07:53:49 -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:07:43.405 07:53:49 -- event/cpu_locks.sh@26 -- # local lock_files 00:07:43.405 07:53:49 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:43.405 07:53:49 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:43.405 07:53:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:43.405 07:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:43.405 07:53:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:43.405 07:53:49 -- event/cpu_locks.sh@71 -- # locks_exist 53500 00:07:43.405 07:53:49 -- event/cpu_locks.sh@22 -- # lslocks -p 53500 00:07:43.405 07:53:49 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:44.342 07:53:50 -- event/cpu_locks.sh@73 -- # killprocess 53500 00:07:44.342 07:53:50 -- common/autotest_common.sh@926 -- # '[' -z 53500 ']' 00:07:44.342 07:53:50 -- common/autotest_common.sh@930 -- # kill -0 53500 00:07:44.342 07:53:50 -- common/autotest_common.sh@931 -- # uname 00:07:44.342 07:53:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:44.342 07:53:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 53500 00:07:44.600 killing process with pid 53500 00:07:44.600 07:53:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:44.600 07:53:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:44.600 07:53:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 53500' 00:07:44.600 07:53:50 -- common/autotest_common.sh@945 -- # kill 53500 00:07:44.600 07:53:50 -- common/autotest_common.sh@950 -- # wait 53500 00:07:44.859 00:07:44.859 real 0m2.265s 00:07:44.859 user 0m2.408s 00:07:44.859 sys 0m1.090s 00:07:44.859 07:53:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.859 07:53:50 -- common/autotest_common.sh@10 -- # set +x 00:07:44.859 ************************************ 00:07:44.859 END TEST default_locks_via_rpc 00:07:44.859 ************************************ 00:07:44.859 07:53:50 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:44.859 07:53:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:44.859 07:53:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:44.859 07:53:50 -- common/autotest_common.sh@10 -- # set +x 00:07:44.859 
************************************ 00:07:44.859 START TEST non_locking_app_on_locked_coremask 00:07:44.859 ************************************ 00:07:44.859 07:53:50 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:07:44.859 07:53:50 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=53560 00:07:44.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.859 07:53:50 -- event/cpu_locks.sh@81 -- # waitforlisten 53560 /var/tmp/spdk.sock 00:07:44.859 07:53:50 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:44.859 07:53:50 -- common/autotest_common.sh@819 -- # '[' -z 53560 ']' 00:07:44.859 07:53:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.859 07:53:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:44.859 07:53:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.859 07:53:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:44.859 07:53:50 -- common/autotest_common.sh@10 -- # set +x 00:07:44.859 [2024-07-13 07:53:50.666324] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:44.859 [2024-07-13 07:53:50.666540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53560 ] 00:07:45.130 [2024-07-13 07:53:50.797811] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.130 [2024-07-13 07:53:50.848569] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:45.130 [2024-07-13 07:53:50.848774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.705 07:53:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:45.705 07:53:51 -- common/autotest_common.sh@852 -- # return 0 00:07:45.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:45.705 07:53:51 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=53581 00:07:45.705 07:53:51 -- event/cpu_locks.sh@85 -- # waitforlisten 53581 /var/tmp/spdk2.sock 00:07:45.705 07:53:51 -- common/autotest_common.sh@819 -- # '[' -z 53581 ']' 00:07:45.705 07:53:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:45.705 07:53:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:45.705 07:53:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:45.705 07:53:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:45.705 07:53:51 -- common/autotest_common.sh@10 -- # set +x 00:07:45.705 07:53:51 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:45.965 [2024-07-13 07:53:51.578388] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:45.965 [2024-07-13 07:53:51.578595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53581 ] 00:07:45.965 [2024-07-13 07:53:51.732672] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
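The "CPU core locks deactivated" notice closing the excerpt above is the point of this case: a second target bound to the already-claimed core 0 boots cleanly because it opts out of locking. Roughly, with paths as printed in the trace:

    # Reconstructed launch sequence for non_locking_app_on_locked_coremask.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &                 # pid 53560, claims core 0
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 \
        --disable-cpumask-locks -r /var/tmp/spdk2.sock &                     # pid 53581: same core, no claim, no conflict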
00:07:45.965 [2024-07-13 07:53:51.732756] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.224 [2024-07-13 07:53:51.843855] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:46.224 [2024-07-13 07:53:51.844071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.792 07:53:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:46.792 07:53:52 -- common/autotest_common.sh@852 -- # return 0 00:07:46.792 07:53:52 -- event/cpu_locks.sh@87 -- # locks_exist 53560 00:07:46.792 07:53:52 -- event/cpu_locks.sh@22 -- # lslocks -p 53560 00:07:46.792 07:53:52 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:48.692 07:53:54 -- event/cpu_locks.sh@89 -- # killprocess 53560 00:07:48.692 07:53:54 -- common/autotest_common.sh@926 -- # '[' -z 53560 ']' 00:07:48.692 07:53:54 -- common/autotest_common.sh@930 -- # kill -0 53560 00:07:48.692 07:53:54 -- common/autotest_common.sh@931 -- # uname 00:07:48.692 07:53:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:48.692 07:53:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 53560 00:07:48.692 killing process with pid 53560 00:07:48.692 07:53:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:48.692 07:53:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:48.692 07:53:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 53560' 00:07:48.692 07:53:54 -- common/autotest_common.sh@945 -- # kill 53560 00:07:48.692 07:53:54 -- common/autotest_common.sh@950 -- # wait 53560 00:07:49.259 07:53:54 -- event/cpu_locks.sh@90 -- # killprocess 53581 00:07:49.259 07:53:54 -- common/autotest_common.sh@926 -- # '[' -z 53581 ']' 00:07:49.259 07:53:54 -- common/autotest_common.sh@930 -- # kill -0 53581 00:07:49.259 07:53:54 -- common/autotest_common.sh@931 -- # uname 00:07:49.259 07:53:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:49.259 07:53:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 53581 00:07:49.259 killing process with pid 53581 00:07:49.259 07:53:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:49.259 07:53:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:49.259 07:53:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 53581' 00:07:49.259 07:53:54 -- common/autotest_common.sh@945 -- # kill 53581 00:07:49.259 07:53:54 -- common/autotest_common.sh@950 -- # wait 53581 00:07:49.518 ************************************ 00:07:49.518 END TEST non_locking_app_on_locked_coremask 00:07:49.518 ************************************ 00:07:49.518 00:07:49.518 real 0m4.591s 00:07:49.518 user 0m4.995s 00:07:49.518 sys 0m2.108s 00:07:49.518 07:53:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.518 07:53:55 -- common/autotest_common.sh@10 -- # set +x 00:07:49.518 07:53:55 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:49.518 07:53:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:49.518 07:53:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:49.518 07:53:55 -- common/autotest_common.sh@10 -- # set +x 00:07:49.518 ************************************ 00:07:49.518 START TEST locking_app_on_unlocked_coremask 00:07:49.518 ************************************ 00:07:49.518 07:53:55 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:07:49.518 07:53:55 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=53659 00:07:49.518 07:53:55 -- event/cpu_locks.sh@99 -- # waitforlisten 53659 /var/tmp/spdk.sock 00:07:49.518 07:53:55 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:49.518 07:53:55 -- common/autotest_common.sh@819 -- # '[' -z 53659 ']' 00:07:49.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.518 07:53:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.518 07:53:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:49.518 07:53:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.518 07:53:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:49.518 07:53:55 -- common/autotest_common.sh@10 -- # set +x 00:07:49.518 [2024-07-13 07:53:55.317851] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:49.518 [2024-07-13 07:53:55.318070] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53659 ] 00:07:49.777 [2024-07-13 07:53:55.451524] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:49.777 [2024-07-13 07:53:55.451605] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.777 [2024-07-13 07:53:55.501810] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:49.777 [2024-07-13 07:53:55.502014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:50.345 07:53:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:50.345 07:53:56 -- common/autotest_common.sh@852 -- # return 0 00:07:50.345 07:53:56 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=53680 00:07:50.345 07:53:56 -- event/cpu_locks.sh@103 -- # waitforlisten 53680 /var/tmp/spdk2.sock 00:07:50.345 07:53:56 -- common/autotest_common.sh@819 -- # '[' -z 53680 ']' 00:07:50.345 07:53:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:50.345 07:53:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:50.345 07:53:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:50.345 07:53:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:50.345 07:53:56 -- common/autotest_common.sh@10 -- # set +x 00:07:50.345 07:53:56 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:50.603 [2024-07-13 07:53:56.292986] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
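locking_app_on_unlocked_coremask inverts the previous case: here the first target starts with --disable-cpumask-locks, so the second, locking instance can claim core 0 even though another app is already running there. A sketch with the pids from this trace:

    # First instance unlocked, second instance takes the lock on the shared core.
    spdk_tgt -m 0x1 --disable-cpumask-locks &        # pid 53659, holds no lock
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &         # pid 53680, claims core 0
    locks_exist 53680                                # lock belongs to the locking instance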
00:07:50.603 [2024-07-13 07:53:56.293185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53680 ] 00:07:50.862 [2024-07-13 07:53:56.438068] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.862 [2024-07-13 07:53:56.537157] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:50.862 [2024-07-13 07:53:56.537382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.429 07:53:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:51.429 07:53:57 -- common/autotest_common.sh@852 -- # return 0 00:07:51.429 07:53:57 -- event/cpu_locks.sh@105 -- # locks_exist 53680 00:07:51.429 07:53:57 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:51.429 07:53:57 -- event/cpu_locks.sh@22 -- # lslocks -p 53680 00:07:53.333 07:53:58 -- event/cpu_locks.sh@107 -- # killprocess 53659 00:07:53.333 07:53:58 -- common/autotest_common.sh@926 -- # '[' -z 53659 ']' 00:07:53.333 07:53:58 -- common/autotest_common.sh@930 -- # kill -0 53659 00:07:53.333 07:53:58 -- common/autotest_common.sh@931 -- # uname 00:07:53.333 07:53:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:53.333 07:53:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 53659 00:07:53.333 07:53:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:53.333 07:53:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:53.333 07:53:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 53659' 00:07:53.333 killing process with pid 53659 00:07:53.333 07:53:58 -- common/autotest_common.sh@945 -- # kill 53659 00:07:53.333 07:53:58 -- common/autotest_common.sh@950 -- # wait 53659 00:07:53.899 07:53:59 -- event/cpu_locks.sh@108 -- # killprocess 53680 00:07:53.899 07:53:59 -- common/autotest_common.sh@926 -- # '[' -z 53680 ']' 00:07:53.899 07:53:59 -- common/autotest_common.sh@930 -- # kill -0 53680 00:07:53.899 07:53:59 -- common/autotest_common.sh@931 -- # uname 00:07:53.899 07:53:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:53.899 07:53:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 53680 00:07:53.899 killing process with pid 53680 00:07:53.899 07:53:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:53.899 07:53:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:53.899 07:53:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 53680' 00:07:53.899 07:53:59 -- common/autotest_common.sh@945 -- # kill 53680 00:07:53.899 07:53:59 -- common/autotest_common.sh@950 -- # wait 53680 00:07:54.158 ************************************ 00:07:54.158 END TEST locking_app_on_unlocked_coremask 00:07:54.158 ************************************ 00:07:54.158 00:07:54.158 real 0m4.616s 00:07:54.158 user 0m5.080s 00:07:54.158 sys 0m2.039s 00:07:54.158 07:53:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.158 07:53:59 -- common/autotest_common.sh@10 -- # set +x 00:07:54.158 07:53:59 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:54.158 07:53:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:54.158 07:53:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:54.158 07:53:59 -- common/autotest_common.sh@10 -- # set +x 
00:07:54.158 ************************************ 00:07:54.158 START TEST locking_app_on_locked_coremask 00:07:54.158 ************************************ 00:07:54.158 07:53:59 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:07:54.158 07:53:59 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=53766 00:07:54.158 07:53:59 -- event/cpu_locks.sh@116 -- # waitforlisten 53766 /var/tmp/spdk.sock 00:07:54.158 07:53:59 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:54.158 07:53:59 -- common/autotest_common.sh@819 -- # '[' -z 53766 ']' 00:07:54.158 07:53:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.158 07:53:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:54.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.158 07:53:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.158 07:53:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:54.158 07:53:59 -- common/autotest_common.sh@10 -- # set +x 00:07:54.416 [2024-07-13 07:54:00.008663] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:54.416 [2024-07-13 07:54:00.008856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53766 ] 00:07:54.416 [2024-07-13 07:54:00.158482] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.416 [2024-07-13 07:54:00.209524] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:54.416 [2024-07-13 07:54:00.209722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.983 07:54:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:54.983 07:54:00 -- common/autotest_common.sh@852 -- # return 0 00:07:54.983 07:54:00 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=53787 00:07:54.983 07:54:00 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 53787 /var/tmp/spdk2.sock 00:07:54.983 07:54:00 -- common/autotest_common.sh@640 -- # local es=0 00:07:54.983 07:54:00 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 53787 /var/tmp/spdk2.sock 00:07:54.983 07:54:00 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:07:54.983 07:54:00 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:54.983 07:54:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:54.983 07:54:00 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:07:54.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:54.983 07:54:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:54.983 07:54:00 -- common/autotest_common.sh@643 -- # waitforlisten 53787 /var/tmp/spdk2.sock 00:07:54.983 07:54:00 -- common/autotest_common.sh@819 -- # '[' -z 53787 ']' 00:07:54.983 07:54:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:54.983 07:54:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:54.984 07:54:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
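The NOT/valid_exec_arg lines in this stretch are the suite's expected-failure wrapper: launching a second locking target on a claimed core must fail, so its waitforlisten runs under NOT, which inverts the exit status. A minimal sketch of that wrapper, assuming the behaviour the es bookkeeping in this test implies:

    # Simplified model of the NOT helper from autotest_common.sh.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))      # succeed only when the wrapped command failed
    }
    NOT waitforlisten 53787 /var/tmp/spdk2.sock   # passes: pid 53787 never comes up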
00:07:54.984 07:54:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:54.984 07:54:00 -- common/autotest_common.sh@10 -- # set +x 00:07:55.241 [2024-07-13 07:54:00.929784] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:55.241 [2024-07-13 07:54:00.929994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53787 ] 00:07:55.499 [2024-07-13 07:54:01.074563] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 53766 has claimed it. 00:07:55.499 [2024-07-13 07:54:01.074643] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:55.756 ERROR: process (pid: 53787) is no longer running 00:07:55.757 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (53787) - No such process 00:07:55.757 07:54:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:55.757 07:54:01 -- common/autotest_common.sh@852 -- # return 1 00:07:55.757 07:54:01 -- common/autotest_common.sh@643 -- # es=1 00:07:55.757 07:54:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:55.757 07:54:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:55.757 07:54:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:55.757 07:54:01 -- event/cpu_locks.sh@122 -- # locks_exist 53766 00:07:55.757 07:54:01 -- event/cpu_locks.sh@22 -- # lslocks -p 53766 00:07:55.757 07:54:01 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:56.689 07:54:02 -- event/cpu_locks.sh@124 -- # killprocess 53766 00:07:56.689 07:54:02 -- common/autotest_common.sh@926 -- # '[' -z 53766 ']' 00:07:56.689 07:54:02 -- common/autotest_common.sh@930 -- # kill -0 53766 00:07:56.689 07:54:02 -- common/autotest_common.sh@931 -- # uname 00:07:56.689 07:54:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:56.689 07:54:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 53766 00:07:56.689 killing process with pid 53766 00:07:56.689 07:54:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:56.689 07:54:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:56.689 07:54:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 53766' 00:07:56.689 07:54:02 -- common/autotest_common.sh@945 -- # kill 53766 00:07:56.689 07:54:02 -- common/autotest_common.sh@950 -- # wait 53766 00:07:56.946 00:07:56.946 real 0m2.897s 00:07:56.946 user 0m3.151s 00:07:56.946 sys 0m1.144s 00:07:56.946 07:54:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.946 ************************************ 00:07:56.946 END TEST locking_app_on_locked_coremask 00:07:56.946 ************************************ 00:07:56.946 07:54:02 -- common/autotest_common.sh@10 -- # set +x 00:07:57.233 07:54:02 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:57.233 07:54:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:57.233 07:54:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:57.233 07:54:02 -- common/autotest_common.sh@10 -- # set +x 00:07:57.233 ************************************ 00:07:57.233 START TEST locking_overlapped_coremask 00:07:57.233 ************************************ 00:07:57.233 07:54:02 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:07:57.233 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.233 07:54:02 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=53849 00:07:57.233 07:54:02 -- event/cpu_locks.sh@133 -- # waitforlisten 53849 /var/tmp/spdk.sock 00:07:57.233 07:54:02 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:57.233 07:54:02 -- common/autotest_common.sh@819 -- # '[' -z 53849 ']' 00:07:57.233 07:54:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.233 07:54:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:57.233 07:54:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.233 07:54:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:57.233 07:54:02 -- common/autotest_common.sh@10 -- # set +x 00:07:57.233 [2024-07-13 07:54:02.946502] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:57.233 [2024-07-13 07:54:02.946698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53849 ] 00:07:57.491 [2024-07-13 07:54:03.079877] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:57.491 [2024-07-13 07:54:03.132455] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:57.491 [2024-07-13 07:54:03.132887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.491 [2024-07-13 07:54:03.133102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.491 [2024-07-13 07:54:03.133100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:58.056 07:54:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:58.056 07:54:03 -- common/autotest_common.sh@852 -- # return 0 00:07:58.056 07:54:03 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=53866 00:07:58.056 07:54:03 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 53866 /var/tmp/spdk2.sock 00:07:58.056 07:54:03 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:58.056 07:54:03 -- common/autotest_common.sh@640 -- # local es=0 00:07:58.056 07:54:03 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 53866 /var/tmp/spdk2.sock 00:07:58.056 07:54:03 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:07:58.056 07:54:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:58.056 07:54:03 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:07:58.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:58.056 07:54:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:58.056 07:54:03 -- common/autotest_common.sh@643 -- # waitforlisten 53866 /var/tmp/spdk2.sock 00:07:58.056 07:54:03 -- common/autotest_common.sh@819 -- # '[' -z 53866 ']' 00:07:58.056 07:54:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:58.056 07:54:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:58.056 07:54:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
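locking_overlapped_coremask, starting above, checks partial overlap: the first target holds cores 0-2 (mask 0x7) and the second asks for cores 2-4 (mask 0x1c). Only core 2 is contested, and that is enough to abort the second launch, as the claim error that follows shows. The bit arithmetic:

    # The masks intersect in exactly one bit: core 2.
    echo $(( 0x07 & 0x1c ))                         # -> 4, i.e. bit 2 set
    NOT waitforlisten 53866 /var/tmp/spdk2.sock     # second target (-m 0x1c) dies claiming core 2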
00:07:58.056 07:54:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:58.056 07:54:03 -- common/autotest_common.sh@10 -- # set +x 00:07:58.314 [2024-07-13 07:54:03.878121] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:58.314 [2024-07-13 07:54:03.878335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53866 ] 00:07:58.314 [2024-07-13 07:54:04.057758] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 53849 has claimed it. 00:07:58.314 [2024-07-13 07:54:04.057855] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:58.878 ERROR: process (pid: 53866) is no longer running 00:07:58.878 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (53866) - No such process 00:07:58.878 07:54:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:58.878 07:54:04 -- common/autotest_common.sh@852 -- # return 1 00:07:58.878 07:54:04 -- common/autotest_common.sh@643 -- # es=1 00:07:58.878 07:54:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:58.878 07:54:04 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:58.878 07:54:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:58.878 07:54:04 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:58.878 07:54:04 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:58.878 07:54:04 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:58.878 07:54:04 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:58.878 07:54:04 -- event/cpu_locks.sh@141 -- # killprocess 53849 00:07:58.878 07:54:04 -- common/autotest_common.sh@926 -- # '[' -z 53849 ']' 00:07:58.878 07:54:04 -- common/autotest_common.sh@930 -- # kill -0 53849 00:07:58.878 07:54:04 -- common/autotest_common.sh@931 -- # uname 00:07:58.878 07:54:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:58.878 07:54:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 53849 00:07:58.878 killing process with pid 53849 00:07:58.878 07:54:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:58.878 07:54:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:58.878 07:54:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 53849' 00:07:58.878 07:54:04 -- common/autotest_common.sh@945 -- # kill 53849 00:07:58.878 07:54:04 -- common/autotest_common.sh@950 -- # wait 53849 00:07:59.135 ************************************ 00:07:59.135 END TEST locking_overlapped_coremask 00:07:59.135 ************************************ 00:07:59.135 00:07:59.135 real 0m1.999s 00:07:59.135 user 0m5.251s 00:07:59.135 sys 0m0.431s 00:07:59.135 07:54:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.135 07:54:04 -- common/autotest_common.sh@10 -- # set +x 00:07:59.135 07:54:04 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:59.135 07:54:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:59.135 07:54:04 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:07:59.135 07:54:04 -- common/autotest_common.sh@10 -- # set +x 00:07:59.135 ************************************ 00:07:59.135 START TEST locking_overlapped_coremask_via_rpc 00:07:59.135 ************************************ 00:07:59.135 07:54:04 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:07:59.135 07:54:04 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=53911 00:07:59.135 07:54:04 -- event/cpu_locks.sh@149 -- # waitforlisten 53911 /var/tmp/spdk.sock 00:07:59.135 07:54:04 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:59.135 07:54:04 -- common/autotest_common.sh@819 -- # '[' -z 53911 ']' 00:07:59.135 07:54:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.135 07:54:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:59.135 07:54:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.135 07:54:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:59.135 07:54:04 -- common/autotest_common.sh@10 -- # set +x 00:07:59.392 [2024-07-13 07:54:05.011676] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:59.392 [2024-07-13 07:54:05.011862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53911 ] 00:07:59.392 [2024-07-13 07:54:05.150193] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:59.392 [2024-07-13 07:54:05.150277] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:59.392 [2024-07-13 07:54:05.202844] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:59.392 [2024-07-13 07:54:05.203229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.392 [2024-07-13 07:54:05.203472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.392 [2024-07-13 07:54:05.203498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:00.327 07:54:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:00.327 07:54:05 -- common/autotest_common.sh@852 -- # return 0 00:08:00.327 07:54:05 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=53934 00:08:00.327 07:54:05 -- event/cpu_locks.sh@153 -- # waitforlisten 53934 /var/tmp/spdk2.sock 00:08:00.327 07:54:05 -- common/autotest_common.sh@819 -- # '[' -z 53934 ']' 00:08:00.327 07:54:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:00.327 07:54:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:00.327 07:54:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:08:00.327 07:54:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:00.327 07:54:05 -- common/autotest_common.sh@10 -- # set +x 00:08:00.327 07:54:05 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:00.327 [2024-07-13 07:54:06.017010] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:00.327 [2024-07-13 07:54:06.017220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53934 ] 00:08:00.586 [2024-07-13 07:54:06.192842] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:00.586 [2024-07-13 07:54:06.192915] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:00.586 [2024-07-13 07:54:06.284111] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:00.586 [2024-07-13 07:54:06.284728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:00.586 [2024-07-13 07:54:06.294650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:08:00.586 [2024-07-13 07:54:06.305481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.963 07:54:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:01.963 07:54:07 -- common/autotest_common.sh@852 -- # return 0 00:08:01.963 07:54:07 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:01.963 07:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:01.963 07:54:07 -- common/autotest_common.sh@10 -- # set +x 00:08:01.963 07:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:01.963 07:54:07 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:01.963 07:54:07 -- common/autotest_common.sh@640 -- # local es=0 00:08:01.963 07:54:07 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:01.963 07:54:07 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:08:01.963 07:54:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:01.963 07:54:07 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:08:01.963 07:54:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:01.963 07:54:07 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:01.963 07:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:01.963 07:54:07 -- common/autotest_common.sh@10 -- # set +x 00:08:01.963 [2024-07-13 07:54:07.499672] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 53911 has claimed it. 
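In the via-rpc variant both targets boot with --disable-cpumask-locks, the first then claims cores 0-2 over RPC, and the second's own framework_enable_cpumask_locks is rejected with the claim error above; the JSON-RPC exchange that follows in the log carries the -32603 "Failed to claim CPU core: 2" response. Driving the same call by hand might look like this (assuming the stock scripts/rpc.py client; the test itself goes through rpc_cmd):

    # Hedged illustration: the rpc.py invocation is not taken from this log.
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
        || echo 'expected failure: core 2 already claimed by pid 53911'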
00:08:01.963 request: 00:08:01.963 { 00:08:01.963 "method": "framework_enable_cpumask_locks", 00:08:01.963 "req_id": 1 00:08:01.963 } 00:08:01.963 Got JSON-RPC error response 00:08:01.963 response: 00:08:01.963 { 00:08:01.963 "code": -32603, 00:08:01.963 "message": "Failed to claim CPU core: 2" 00:08:01.963 } 00:08:01.963 07:54:07 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:08:01.963 07:54:07 -- common/autotest_common.sh@643 -- # es=1 00:08:01.963 07:54:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:01.963 07:54:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:01.963 07:54:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:01.963 07:54:07 -- event/cpu_locks.sh@158 -- # waitforlisten 53911 /var/tmp/spdk.sock 00:08:01.963 07:54:07 -- common/autotest_common.sh@819 -- # '[' -z 53911 ']' 00:08:01.963 07:54:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.963 07:54:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:01.963 07:54:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.963 07:54:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:01.963 07:54:07 -- common/autotest_common.sh@10 -- # set +x 00:08:01.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:01.963 07:54:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:01.963 07:54:07 -- common/autotest_common.sh@852 -- # return 0 00:08:01.963 07:54:07 -- event/cpu_locks.sh@159 -- # waitforlisten 53934 /var/tmp/spdk2.sock 00:08:01.963 07:54:07 -- common/autotest_common.sh@819 -- # '[' -z 53934 ']' 00:08:01.963 07:54:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:01.963 07:54:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:01.963 07:54:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:08:01.963 07:54:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:01.963 07:54:07 -- common/autotest_common.sh@10 -- # set +x 00:08:02.223 07:54:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:02.223 07:54:07 -- common/autotest_common.sh@852 -- # return 0 00:08:02.223 07:54:07 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:02.223 07:54:07 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:02.223 07:54:07 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:02.223 ************************************ 00:08:02.223 END TEST locking_overlapped_coremask_via_rpc 00:08:02.223 ************************************ 00:08:02.223 07:54:07 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:02.223 00:08:02.223 real 0m3.028s 00:08:02.223 user 0m1.307s 00:08:02.223 sys 0m0.166s 00:08:02.223 07:54:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.223 07:54:07 -- common/autotest_common.sh@10 -- # set +x 00:08:02.223 07:54:07 -- event/cpu_locks.sh@174 -- # cleanup 00:08:02.223 07:54:07 -- event/cpu_locks.sh@15 -- # [[ -z 53911 ]] 00:08:02.223 07:54:07 -- event/cpu_locks.sh@15 -- # killprocess 53911 00:08:02.223 07:54:07 -- common/autotest_common.sh@926 -- # '[' -z 53911 ']' 00:08:02.223 07:54:07 -- common/autotest_common.sh@930 -- # kill -0 53911 00:08:02.223 07:54:07 -- common/autotest_common.sh@931 -- # uname 00:08:02.223 07:54:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:02.223 07:54:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 53911 00:08:02.223 killing process with pid 53911 00:08:02.223 07:54:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:02.223 07:54:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:02.223 07:54:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 53911' 00:08:02.223 07:54:07 -- common/autotest_common.sh@945 -- # kill 53911 00:08:02.223 07:54:07 -- common/autotest_common.sh@950 -- # wait 53911 00:08:02.483 07:54:08 -- event/cpu_locks.sh@16 -- # [[ -z 53934 ]] 00:08:02.483 07:54:08 -- event/cpu_locks.sh@16 -- # killprocess 53934 00:08:02.483 07:54:08 -- common/autotest_common.sh@926 -- # '[' -z 53934 ']' 00:08:02.483 07:54:08 -- common/autotest_common.sh@930 -- # kill -0 53934 00:08:02.483 07:54:08 -- common/autotest_common.sh@931 -- # uname 00:08:02.483 07:54:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:02.483 07:54:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 53934 00:08:02.742 killing process with pid 53934 00:08:02.742 07:54:08 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:08:02.742 07:54:08 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:08:02.742 07:54:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 53934' 00:08:02.742 07:54:08 -- common/autotest_common.sh@945 -- # kill 53934 00:08:02.742 07:54:08 -- common/autotest_common.sh@950 -- # wait 53934 00:08:03.001 07:54:08 -- event/cpu_locks.sh@18 -- # rm -f 00:08:03.001 Process with pid 53911 is not found 00:08:03.001 Process with pid 53934 is not found 00:08:03.001 07:54:08 -- event/cpu_locks.sh@1 -- # cleanup 00:08:03.001 07:54:08 -- event/cpu_locks.sh@15 -- # [[ -z 53911 ]] 
00:08:03.001 07:54:08 -- event/cpu_locks.sh@15 -- # killprocess 53911 00:08:03.001 07:54:08 -- common/autotest_common.sh@926 -- # '[' -z 53911 ']' 00:08:03.001 07:54:08 -- common/autotest_common.sh@930 -- # kill -0 53911 00:08:03.001 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (53911) - No such process 00:08:03.001 07:54:08 -- common/autotest_common.sh@953 -- # echo 'Process with pid 53911 is not found' 00:08:03.001 07:54:08 -- event/cpu_locks.sh@16 -- # [[ -z 53934 ]] 00:08:03.001 07:54:08 -- event/cpu_locks.sh@16 -- # killprocess 53934 00:08:03.001 07:54:08 -- common/autotest_common.sh@926 -- # '[' -z 53934 ']' 00:08:03.001 07:54:08 -- common/autotest_common.sh@930 -- # kill -0 53934 00:08:03.001 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (53934) - No such process 00:08:03.001 07:54:08 -- common/autotest_common.sh@953 -- # echo 'Process with pid 53934 is not found' 00:08:03.001 07:54:08 -- event/cpu_locks.sh@18 -- # rm -f 00:08:03.001 ************************************ 00:08:03.001 END TEST cpu_locks 00:08:03.001 ************************************ 00:08:03.001 00:08:03.001 real 0m22.686s 00:08:03.001 user 0m38.088s 00:08:03.001 sys 0m8.870s 00:08:03.001 07:54:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.001 07:54:08 -- common/autotest_common.sh@10 -- # set +x 00:08:03.001 ************************************ 00:08:03.001 END TEST event 00:08:03.001 ************************************ 00:08:03.001 00:08:03.001 real 0m30.074s 00:08:03.001 user 0m49.871s 00:08:03.001 sys 0m9.691s 00:08:03.001 07:54:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.001 07:54:08 -- common/autotest_common.sh@10 -- # set +x 00:08:03.001 07:54:08 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:03.001 07:54:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:03.001 07:54:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:03.001 07:54:08 -- common/autotest_common.sh@10 -- # set +x 00:08:03.001 ************************************ 00:08:03.001 START TEST thread 00:08:03.001 ************************************ 00:08:03.001 07:54:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:03.001 * Looking for test storage... 00:08:03.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:03.001 07:54:08 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:03.001 07:54:08 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:08:03.001 07:54:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:03.001 07:54:08 -- common/autotest_common.sh@10 -- # set +x 00:08:03.261 ************************************ 00:08:03.261 START TEST thread_poller_perf 00:08:03.261 ************************************ 00:08:03.261 07:54:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:03.261 [2024-07-13 07:54:08.849214] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
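The poller_perf binary initializing above was invoked as poller_perf -b 1000 -l 1 -t 1; matching the banner it prints, -b appears to be the number of registered pollers, -l the poller period in microseconds, and -t the runtime in seconds (a second pass later repeats the run with -l 0, i.e. busy pollers). The poller_cost reported below is just busy cycles divided by total_run_count, converted to nanoseconds via the 2100000000-cycle tsc_hz:

    # Reproducing the first run's printed cost from its counters:
    echo $(( 2106020924 / 1475000 ))    # -> 1427 cycles per poll
    echo $(( 1427 * 1000 / 2100 ))      # -> 679 nsec at 2.1 GHz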
00:08:03.261 [2024-07-13 07:54:08.849758] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54082 ] 00:08:03.261 [2024-07-13 07:54:08.999851] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.261 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:03.261 [2024-07-13 07:54:09.051557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.639 ====================================== 00:08:04.639 busy:2106020924 (cyc) 00:08:04.639 total_run_count: 1475000 00:08:04.639 tsc_hz: 2100000000 (cyc) 00:08:04.639 ====================================== 00:08:04.639 poller_cost: 1427 (cyc), 679 (nsec) 00:08:04.639 00:08:04.639 real 0m1.314s 00:08:04.639 user 0m1.127s 00:08:04.639 sys 0m0.087s 00:08:04.639 07:54:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.639 ************************************ 00:08:04.639 END TEST thread_poller_perf 00:08:04.639 ************************************ 00:08:04.639 07:54:10 -- common/autotest_common.sh@10 -- # set +x 00:08:04.639 07:54:10 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:04.639 07:54:10 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:08:04.639 07:54:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:04.639 07:54:10 -- common/autotest_common.sh@10 -- # set +x 00:08:04.639 ************************************ 00:08:04.639 START TEST thread_poller_perf 00:08:04.639 ************************************ 00:08:04.639 07:54:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:04.639 [2024-07-13 07:54:10.211090] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:04.639 [2024-07-13 07:54:10.211391] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54120 ] 00:08:04.639 [2024-07-13 07:54:10.359503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.639 [2024-07-13 07:54:10.410580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.639 Running 1000 pollers for 1 seconds with 0 microseconds period. 
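The poller_cost line above is plain arithmetic on the counters the tool prints: busy cycles divided by run count, converted to nanoseconds through tsc_hz. A quick sanity check with the values from this run (rounding by truncation assumed):

  busy=2106020924 runs=1475000 tsc_hz=2100000000
  awk -v b="$busy" -v r="$runs" -v hz="$tsc_hz" 'BEGIN {
    cyc = b / r              # 2106020924 / 1475000 ~= 1427 cycles per poll
    ns  = cyc / (hz / 1e9)   # 1427 cycles at 2.1 GHz ~= 679 nsec
    printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, ns
  }'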
00:08:06.017 ====================================== 00:08:06.017 busy:2104326770 (cyc) 00:08:06.017 total_run_count: 16085000 00:08:06.017 tsc_hz: 2100000000 (cyc) 00:08:06.017 ====================================== 00:08:06.017 poller_cost: 130 (cyc), 61 (nsec) 00:08:06.017 00:08:06.017 real 0m1.307s 00:08:06.017 user 0m1.126s 00:08:06.017 sys 0m0.080s 00:08:06.017 07:54:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.017 07:54:11 -- common/autotest_common.sh@10 -- # set +x 00:08:06.017 ************************************ 00:08:06.017 END TEST thread_poller_perf 00:08:06.017 ************************************ 00:08:06.017 07:54:11 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:08:06.017 07:54:11 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:08:06.017 07:54:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:06.017 07:54:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:06.017 07:54:11 -- common/autotest_common.sh@10 -- # set +x 00:08:06.017 ************************************ 00:08:06.017 START TEST thread_spdk_lock 00:08:06.017 ************************************ 00:08:06.017 07:54:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:08:06.017 [2024-07-13 07:54:11.569759] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:06.017 [2024-07-13 07:54:11.570044] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54161 ] 00:08:06.017 [2024-07-13 07:54:11.729259] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:06.017 [2024-07-13 07:54:11.782129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.017 [2024-07-13 07:54:11.782132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.583 [2024-07-13 07:54:12.260805] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 955:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:08:06.583 [2024-07-13 07:54:12.260917] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:08:06.583 [2024-07-13 07:54:12.260946] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x6aef40 00:08:06.583 [2024-07-13 07:54:12.262226] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:08:06.583 [2024-07-13 07:54:12.262328] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1016:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:08:06.583 [2024-07-13 07:54:12.262358] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:08:06.583 Starting test contend 00:08:06.583 Worker Delay Wait us Hold us Total us 00:08:06.583 0 3 179637 178939 358576 00:08:06.583 1 5 92561 281025 373587 00:08:06.583 PASS test contend 00:08:06.583 Starting test hold_by_poller 
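The zero-period run above follows the same arithmetic as the first: 2104326770 busy cycles over 16085000 runs is ~130 cycles per poll, and 130 / 2.1 GHz is ~61 nsec, so dropping the 1-microsecond timer cuts the per-invocation overhead by an order of magnitude. In the spdk_lock contend table, each row appears to be one worker: id, configured delay, microseconds spent waiting for the spinlock, microseconds holding it, and the sum. The columns are internally consistent: 179637 + 178939 = 358576 for worker 0, and 92561 + 281025 = 373586 against the printed 373587 for worker 1, the off-by-one presumably being tick-to-microsecond rounding.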
00:08:06.583 PASS test hold_by_poller 00:08:06.583 Starting test hold_by_message 00:08:06.583 PASS test hold_by_message 00:08:06.583 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:08:06.583 100014 assertions passed 00:08:06.583 0 assertions failed 00:08:06.583 ************************************ 00:08:06.583 END TEST thread_spdk_lock 00:08:06.583 ************************************ 00:08:06.583 00:08:06.583 real 0m0.800s 00:08:06.583 user 0m1.089s 00:08:06.583 sys 0m0.091s 00:08:06.584 07:54:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.584 07:54:12 -- common/autotest_common.sh@10 -- # set +x 00:08:06.584 ************************************ 00:08:06.584 END TEST thread 00:08:06.584 ************************************ 00:08:06.584 00:08:06.584 real 0m3.662s 00:08:06.584 user 0m3.438s 00:08:06.584 sys 0m0.405s 00:08:06.584 07:54:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.584 07:54:12 -- common/autotest_common.sh@10 -- # set +x 00:08:06.842 07:54:12 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:08:06.842 07:54:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:06.842 07:54:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:06.842 07:54:12 -- common/autotest_common.sh@10 -- # set +x 00:08:06.842 ************************************ 00:08:06.842 START TEST accel 00:08:06.842 ************************************ 00:08:06.842 07:54:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:08:06.842 * Looking for test storage... 00:08:06.842 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:06.842 07:54:12 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:08:06.842 07:54:12 -- accel/accel.sh@74 -- # get_expected_opcs 00:08:06.842 07:54:12 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:06.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.842 07:54:12 -- accel/accel.sh@59 -- # spdk_tgt_pid=54250 00:08:06.842 07:54:12 -- accel/accel.sh@60 -- # waitforlisten 54250 00:08:06.842 07:54:12 -- common/autotest_common.sh@819 -- # '[' -z 54250 ']' 00:08:06.842 07:54:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.842 07:54:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:06.842 07:54:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.842 07:54:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:06.842 07:54:12 -- common/autotest_common.sh@10 -- # set +x 00:08:06.842 07:54:12 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:08:06.842 07:54:12 -- accel/accel.sh@58 -- # build_accel_config 00:08:06.842 07:54:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:06.842 07:54:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.842 07:54:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.842 07:54:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:06.842 07:54:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:06.842 07:54:12 -- accel/accel.sh@41 -- # local IFS=, 00:08:06.842 07:54:12 -- accel/accel.sh@42 -- # jq -r . 00:08:06.842 [2024-07-13 07:54:12.647000] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
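The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message comes from a waitforlisten-style helper that polls until the freshly launched spdk_tgt answers on its RPC socket. A simplified sketch of that pattern (max_retries=100 matches the trace; the real helper probes with an actual RPC rather than just checking for the socket file, so treat this as an approximation):

  waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
      kill -0 "$pid" 2> /dev/null || return 1   # target died during startup
      [[ -S $rpc_addr ]] && return 0            # socket exists: tgt is listening
      sleep 0.1
    done
    return 1                                    # timed out
  }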
00:08:06.842 [2024-07-13 07:54:12.647218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54250 ] 00:08:07.099 [2024-07-13 07:54:12.787757] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.099 [2024-07-13 07:54:12.838546] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:07.099 [2024-07-13 07:54:12.838776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.683 07:54:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:07.683 07:54:13 -- common/autotest_common.sh@852 -- # return 0 00:08:07.683 07:54:13 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:08:07.683 07:54:13 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:08:07.683 07:54:13 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:08:07.683 07:54:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:07.683 07:54:13 -- common/autotest_common.sh@10 -- # set +x 00:08:07.683 07:54:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:07.941 07:54:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.941 07:54:13 -- accel/accel.sh@64 -- # IFS== 00:08:07.941 07:54:13 -- accel/accel.sh@64 -- # read -r opc module 00:08:07.941 07:54:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:07.941 07:54:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.941 07:54:13 -- accel/accel.sh@64 -- # IFS== 00:08:07.941 07:54:13 -- accel/accel.sh@64 -- # read -r opc module 00:08:07.941 07:54:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:07.941 07:54:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.941 07:54:13 -- accel/accel.sh@64 -- # IFS== 00:08:07.941 07:54:13 -- accel/accel.sh@64 -- # read -r opc module 00:08:07.941 07:54:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:07.941 07:54:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.941 07:54:13 -- accel/accel.sh@64 -- # IFS== 00:08:07.941 07:54:13 -- accel/accel.sh@64 -- # read -r opc module 00:08:07.941 07:54:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:07.941 07:54:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.941 07:54:13 -- accel/accel.sh@64 -- # IFS== 00:08:07.941 07:54:13 -- accel/accel.sh@64 -- # read -r opc module 00:08:07.941 07:54:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:07.941 07:54:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.941 07:54:13 -- accel/accel.sh@64 -- # IFS== 00:08:07.941 07:54:13 -- accel/accel.sh@64 -- # read -r opc module 00:08:07.941 07:54:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:07.941 07:54:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.941 07:54:13 -- accel/accel.sh@64 -- # IFS== 00:08:07.941 07:54:13 -- accel/accel.sh@64 -- # read -r opc module 00:08:07.941 07:54:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:07.941 07:54:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.941 07:54:13 -- accel/accel.sh@64 -- # IFS== 00:08:07.941 07:54:13 -- accel/accel.sh@64 -- # read -r opc module 00:08:07.941 07:54:13 -- accel/accel.sh@65 -- # 
expected_opcs["$opc"]=software 00:08:07.941 07:54:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.941 07:54:13 -- accel/accel.sh@64 -- # IFS== 00:08:07.941 07:54:13 -- accel/accel.sh@64 -- # read -r opc module 00:08:07.941 07:54:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:07.941 07:54:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.941 07:54:13 -- accel/accel.sh@64 -- # IFS== 00:08:07.941 07:54:13 -- accel/accel.sh@64 -- # read -r opc module 00:08:07.941 07:54:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:07.941 07:54:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.941 07:54:13 -- accel/accel.sh@64 -- # IFS== 00:08:07.941 07:54:13 -- accel/accel.sh@64 -- # read -r opc module 00:08:07.941 07:54:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:07.941 07:54:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.941 07:54:13 -- accel/accel.sh@64 -- # IFS== 00:08:07.941 07:54:13 -- accel/accel.sh@64 -- # read -r opc module 00:08:07.941 07:54:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:07.941 07:54:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.941 07:54:13 -- accel/accel.sh@64 -- # IFS== 00:08:07.941 07:54:13 -- accel/accel.sh@64 -- # read -r opc module 00:08:07.941 07:54:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:07.941 07:54:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.941 07:54:13 -- accel/accel.sh@64 -- # IFS== 00:08:07.941 07:54:13 -- accel/accel.sh@64 -- # read -r opc module 00:08:07.941 07:54:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:07.941 07:54:13 -- accel/accel.sh@67 -- # killprocess 54250 00:08:07.941 07:54:13 -- common/autotest_common.sh@926 -- # '[' -z 54250 ']' 00:08:07.941 07:54:13 -- common/autotest_common.sh@930 -- # kill -0 54250 00:08:07.941 07:54:13 -- common/autotest_common.sh@931 -- # uname 00:08:07.941 07:54:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:07.941 07:54:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54250 00:08:07.941 killing process with pid 54250 00:08:07.941 07:54:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:07.941 07:54:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:07.941 07:54:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54250' 00:08:07.941 07:54:13 -- common/autotest_common.sh@945 -- # kill 54250 00:08:07.941 07:54:13 -- common/autotest_common.sh@950 -- # wait 54250 00:08:08.199 07:54:13 -- accel/accel.sh@68 -- # trap - ERR 00:08:08.199 07:54:13 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:08:08.199 07:54:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:08.199 07:54:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:08.199 07:54:13 -- common/autotest_common.sh@10 -- # set +x 00:08:08.199 07:54:13 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:08:08.199 07:54:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:08:08.199 07:54:13 -- accel/accel.sh@12 -- # build_accel_config 00:08:08.199 07:54:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:08.199 07:54:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.199 07:54:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.199 07:54:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:08.199 07:54:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 
00:08:08.199 07:54:13 -- accel/accel.sh@41 -- # local IFS=, 00:08:08.199 07:54:13 -- accel/accel.sh@42 -- # jq -r . 00:08:08.457 07:54:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.457 07:54:14 -- common/autotest_common.sh@10 -- # set +x 00:08:08.457 07:54:14 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:08:08.457 07:54:14 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:08.457 07:54:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:08.457 07:54:14 -- common/autotest_common.sh@10 -- # set +x 00:08:08.457 ************************************ 00:08:08.457 START TEST accel_missing_filename 00:08:08.458 ************************************ 00:08:08.458 07:54:14 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:08:08.458 07:54:14 -- common/autotest_common.sh@640 -- # local es=0 00:08:08.458 07:54:14 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:08:08.458 07:54:14 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:08:08.458 07:54:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:08.458 07:54:14 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:08:08.458 07:54:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:08.458 07:54:14 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:08:08.458 07:54:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:08:08.458 07:54:14 -- accel/accel.sh@12 -- # build_accel_config 00:08:08.458 07:54:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:08.458 07:54:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.458 07:54:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.458 07:54:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:08.458 07:54:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:08.458 07:54:14 -- accel/accel.sh@41 -- # local IFS=, 00:08:08.458 07:54:14 -- accel/accel.sh@42 -- # jq -r . 00:08:08.458 [2024-07-13 07:54:14.223549] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:08.458 [2024-07-13 07:54:14.223726] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54317 ] 00:08:08.716 [2024-07-13 07:54:14.354609] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.716 [2024-07-13 07:54:14.404861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.716 [2024-07-13 07:54:14.452641] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:08.716 [2024-07-13 07:54:14.513387] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:08:08.975 A filename is required. 
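The abort above is the expected outcome: per the usage text printed further down, compress and decompress workloads need -l <uncompressed input file>, and this NOT test deliberately omits it. A presumably-passing counterpart, with the binary and input-file paths taken from the surrounding trace, would be:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib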
00:08:08.975 07:54:14 -- common/autotest_common.sh@643 -- # es=234 00:08:08.975 07:54:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:08.975 07:54:14 -- common/autotest_common.sh@652 -- # es=106 00:08:08.975 07:54:14 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:08.975 07:54:14 -- common/autotest_common.sh@660 -- # es=1 00:08:08.975 07:54:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:08.975 00:08:08.975 real 0m0.501s 00:08:08.975 user 0m0.220s 00:08:08.975 sys 0m0.141s 00:08:08.975 07:54:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.975 07:54:14 -- common/autotest_common.sh@10 -- # set +x 00:08:08.975 ************************************ 00:08:08.975 END TEST accel_missing_filename 00:08:08.975 ************************************ 00:08:08.975 07:54:14 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:08.975 07:54:14 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:08:08.975 07:54:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:08.975 07:54:14 -- common/autotest_common.sh@10 -- # set +x 00:08:08.975 ************************************ 00:08:08.975 START TEST accel_compress_verify 00:08:08.975 ************************************ 00:08:08.975 07:54:14 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:08.975 07:54:14 -- common/autotest_common.sh@640 -- # local es=0 00:08:08.975 07:54:14 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:08.975 07:54:14 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:08:08.975 07:54:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:08.975 07:54:14 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:08:08.975 07:54:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:08.975 07:54:14 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:08.975 07:54:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:08.975 07:54:14 -- accel/accel.sh@12 -- # build_accel_config 00:08:08.975 07:54:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:08.975 07:54:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.975 07:54:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.975 07:54:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:08.975 07:54:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:08.975 07:54:14 -- accel/accel.sh@41 -- # local IFS=, 00:08:08.975 07:54:14 -- accel/accel.sh@42 -- # jq -r . 00:08:08.975 [2024-07-13 07:54:14.767482] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
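Each of these NOT tests ends with the same es= bookkeeping visible at the top of this block: the inner accel_perf exit status (234 here) gets folded, and the wrapper succeeds only if the command failed. A hedged reconstruction (the real helper's case statement distinguishes more codes than shown):

  NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=$(( es - 128 ))  # 234 -> 106: strip the 128+signal offset
    (( es != 0 )) && es=1                 # collapse any remaining failure to 1
    (( ! es == 0 ))                       # invert: exit 0 iff the command failed
  }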
00:08:08.975 [2024-07-13 07:54:14.767678] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54349 ] 00:08:09.233 [2024-07-13 07:54:14.911232] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.233 [2024-07-13 07:54:14.963006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.233 [2024-07-13 07:54:15.010840] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:09.492 [2024-07-13 07:54:15.071992] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:08:09.492 00:08:09.492 Compression does not support the verify option, aborting. 00:08:09.492 ************************************ 00:08:09.492 END TEST accel_compress_verify 00:08:09.492 ************************************ 00:08:09.492 07:54:15 -- common/autotest_common.sh@643 -- # es=161 00:08:09.492 07:54:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:09.492 07:54:15 -- common/autotest_common.sh@652 -- # es=33 00:08:09.492 07:54:15 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:09.492 07:54:15 -- common/autotest_common.sh@660 -- # es=1 00:08:09.492 07:54:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:09.492 00:08:09.492 real 0m0.514s 00:08:09.492 user 0m0.232s 00:08:09.492 sys 0m0.138s 00:08:09.492 07:54:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.492 07:54:15 -- common/autotest_common.sh@10 -- # set +x 00:08:09.492 07:54:15 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:08:09.492 07:54:15 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:09.492 07:54:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:09.492 07:54:15 -- common/autotest_common.sh@10 -- # set +x 00:08:09.492 ************************************ 00:08:09.492 START TEST accel_wrong_workload 00:08:09.492 ************************************ 00:08:09.492 07:54:15 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:08:09.492 07:54:15 -- common/autotest_common.sh@640 -- # local es=0 00:08:09.492 07:54:15 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:08:09.492 07:54:15 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:08:09.492 07:54:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:09.492 07:54:15 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:08:09.492 07:54:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:09.492 07:54:15 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:08:09.492 07:54:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:08:09.492 07:54:15 -- accel/accel.sh@12 -- # build_accel_config 00:08:09.492 07:54:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:09.492 07:54:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.492 07:54:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.492 07:54:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:09.492 07:54:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:09.492 07:54:15 -- accel/accel.sh@41 -- # local IFS=, 00:08:09.492 07:54:15 -- accel/accel.sh@42 -- # jq -r . 
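The verify failure below takes the same exit-status fold: accel_perf dies with es=161, and since 161 > 128 it becomes 161 - 128 = 33, which the case statement again collapses to 1, so NOT reports success. The next two NOT tests feed the parser an unknown workload (-w foobar) and a negative buffer count (-x -1), each of which aborts before startup and prints the full usage block.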
00:08:09.750 Unsupported workload type: foobar 00:08:09.750 [2024-07-13 07:54:15.342269] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:08:09.750 accel_perf options: 00:08:09.750 [-h help message] 00:08:09.750 [-q queue depth per core] 00:08:09.750 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:09.750 [-T number of threads per core 00:08:09.750 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:09.750 [-t time in seconds] 00:08:09.751 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:09.751 [ dif_verify, , dif_generate, dif_generate_copy 00:08:09.751 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:09.751 [-l for compress/decompress workloads, name of uncompressed input file 00:08:09.751 [-S for crc32c workload, use this seed value (default 0) 00:08:09.751 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:09.751 [-f for fill workload, use this BYTE value (default 255) 00:08:09.751 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:09.751 [-y verify result if this switch is on] 00:08:09.751 [-a tasks to allocate per core (default: same value as -q)] 00:08:09.751 Can be used to spread operations across a wider range of memory. 00:08:09.751 ************************************ 00:08:09.751 END TEST accel_wrong_workload 00:08:09.751 ************************************ 00:08:09.751 07:54:15 -- common/autotest_common.sh@643 -- # es=1 00:08:09.751 07:54:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:09.751 07:54:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:09.751 07:54:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:09.751 00:08:09.751 real 0m0.154s 00:08:09.751 user 0m0.074s 00:08:09.751 sys 0m0.042s 00:08:09.751 07:54:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.751 07:54:15 -- common/autotest_common.sh@10 -- # set +x 00:08:09.751 07:54:15 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:08:09.751 07:54:15 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:08:09.751 07:54:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:09.751 07:54:15 -- common/autotest_common.sh@10 -- # set +x 00:08:09.751 ************************************ 00:08:09.751 START TEST accel_negative_buffers 00:08:09.751 ************************************ 00:08:09.751 07:54:15 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:08:09.751 07:54:15 -- common/autotest_common.sh@640 -- # local es=0 00:08:09.751 07:54:15 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:08:09.751 07:54:15 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:08:09.751 07:54:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:09.751 07:54:15 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:08:09.751 07:54:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:09.751 07:54:15 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:08:09.751 07:54:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:08:09.751 07:54:15 -- accel/accel.sh@12 -- # 
build_accel_config 00:08:09.751 07:54:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:09.751 07:54:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.751 07:54:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.751 07:54:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:09.751 07:54:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:09.751 07:54:15 -- accel/accel.sh@41 -- # local IFS=, 00:08:09.751 07:54:15 -- accel/accel.sh@42 -- # jq -r . 00:08:09.751 -x option must be non-negative. 00:08:09.751 [2024-07-13 07:54:15.546153] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:08:10.010 accel_perf options: 00:08:10.010 [-h help message] 00:08:10.010 [-q queue depth per core] 00:08:10.010 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:10.010 [-T number of threads per core 00:08:10.010 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:10.010 [-t time in seconds] 00:08:10.010 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:10.010 [ dif_verify, , dif_generate, dif_generate_copy 00:08:10.010 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:10.010 [-l for compress/decompress workloads, name of uncompressed input file 00:08:10.010 [-S for crc32c workload, use this seed value (default 0) 00:08:10.010 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:10.010 [-f for fill workload, use this BYTE value (default 255) 00:08:10.010 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:10.010 [-y verify result if this switch is on] 00:08:10.010 [-a tasks to allocate per core (default: same value as -q)] 00:08:10.010 Can be used to spread operations across a wider range of memory. 
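Both option-parsing failures land in spdk_app_parse_args before any work is queued, which is why the full usage block is printed twice. Going by that text, xor requires at least two source buffers, so a presumably-valid counterpart to the failing -x -1 invocation (binary path taken from the trace) would be:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2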
00:08:10.010 ************************************ 00:08:10.010 END TEST accel_negative_buffers 00:08:10.010 ************************************ 00:08:10.010 07:54:15 -- common/autotest_common.sh@643 -- # es=1 00:08:10.010 07:54:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:10.010 07:54:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:10.010 07:54:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:10.010 00:08:10.010 real 0m0.152s 00:08:10.010 user 0m0.085s 00:08:10.010 sys 0m0.033s 00:08:10.010 07:54:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.010 07:54:15 -- common/autotest_common.sh@10 -- # set +x 00:08:10.010 07:54:15 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:08:10.010 07:54:15 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:08:10.010 07:54:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:10.010 07:54:15 -- common/autotest_common.sh@10 -- # set +x 00:08:10.010 ************************************ 00:08:10.010 START TEST accel_crc32c 00:08:10.010 ************************************ 00:08:10.010 07:54:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:08:10.010 07:54:15 -- accel/accel.sh@16 -- # local accel_opc 00:08:10.010 07:54:15 -- accel/accel.sh@17 -- # local accel_module 00:08:10.010 07:54:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:08:10.010 07:54:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:08:10.010 07:54:15 -- accel/accel.sh@12 -- # build_accel_config 00:08:10.010 07:54:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:10.011 07:54:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.011 07:54:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.011 07:54:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:10.011 07:54:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:10.011 07:54:15 -- accel/accel.sh@41 -- # local IFS=, 00:08:10.011 07:54:15 -- accel/accel.sh@42 -- # jq -r . 00:08:10.011 [2024-07-13 07:54:15.768830] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:10.011 [2024-07-13 07:54:15.769013] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54425 ] 00:08:10.270 [2024-07-13 07:54:15.901662] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.270 [2024-07-13 07:54:15.954326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.648 07:54:17 -- accel/accel.sh@18 -- # out=' 00:08:11.648 SPDK Configuration: 00:08:11.648 Core mask: 0x1 00:08:11.648 00:08:11.648 Accel Perf Configuration: 00:08:11.648 Workload Type: crc32c 00:08:11.648 CRC-32C seed: 32 00:08:11.648 Transfer size: 4096 bytes 00:08:11.648 Vector count 1 00:08:11.648 Module: software 00:08:11.648 Queue depth: 32 00:08:11.648 Allocate depth: 32 00:08:11.648 # threads/core: 1 00:08:11.648 Run time: 1 seconds 00:08:11.648 Verify: Yes 00:08:11.648 00:08:11.648 Running for 1 seconds... 
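In the result table that follows, bandwidth is just transfers per second times the 4096-byte buffer size: 127296 × 4096 B ≈ 521.4 MB/s, i.e. 497 MiB/s, matching the printed figure. With Module: software the CRC-32C is computed on the host CPU, so this is effectively a single-core hashing rate.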
00:08:11.648 00:08:11.648 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:11.648 ------------------------------------------------------------------------------------ 00:08:11.648 0,0 127296/s 497 MiB/s 0 0 00:08:11.648 ==================================================================================== 00:08:11.648 Total 127296/s 497 MiB/s 0 0' 00:08:11.648 07:54:17 -- accel/accel.sh@20 -- # IFS=: 00:08:11.648 07:54:17 -- accel/accel.sh@20 -- # read -r var val 00:08:11.648 07:54:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:08:11.648 07:54:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:08:11.648 07:54:17 -- accel/accel.sh@12 -- # build_accel_config 00:08:11.648 07:54:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:11.649 07:54:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.649 07:54:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.649 07:54:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:11.649 07:54:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:11.649 07:54:17 -- accel/accel.sh@41 -- # local IFS=, 00:08:11.649 07:54:17 -- accel/accel.sh@42 -- # jq -r . 00:08:11.649 [2024-07-13 07:54:17.275178] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:11.649 [2024-07-13 07:54:17.275350] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54459 ] 00:08:11.649 [2024-07-13 07:54:17.404973] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.649 [2024-07-13 07:54:17.459795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.908 07:54:17 -- accel/accel.sh@21 -- # val= 00:08:11.908 07:54:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.908 07:54:17 -- accel/accel.sh@20 -- # IFS=: 00:08:11.908 07:54:17 -- accel/accel.sh@20 -- # read -r var val 00:08:11.908 07:54:17 -- accel/accel.sh@21 -- # val= 00:08:11.908 07:54:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.908 07:54:17 -- accel/accel.sh@20 -- # IFS=: 00:08:11.908 07:54:17 -- accel/accel.sh@20 -- # read -r var val 00:08:11.908 07:54:17 -- accel/accel.sh@21 -- # val=0x1 00:08:11.908 07:54:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.908 07:54:17 -- accel/accel.sh@20 -- # IFS=: 00:08:11.908 07:54:17 -- accel/accel.sh@20 -- # read -r var val 00:08:11.908 07:54:17 -- accel/accel.sh@21 -- # val= 00:08:11.908 07:54:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.908 07:54:17 -- accel/accel.sh@20 -- # IFS=: 00:08:11.908 07:54:17 -- accel/accel.sh@20 -- # read -r var val 00:08:11.908 07:54:17 -- accel/accel.sh@21 -- # val= 00:08:11.908 07:54:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.908 07:54:17 -- accel/accel.sh@20 -- # IFS=: 00:08:11.908 07:54:17 -- accel/accel.sh@20 -- # read -r var val 00:08:11.908 07:54:17 -- accel/accel.sh@21 -- # val=crc32c 00:08:11.908 07:54:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.908 07:54:17 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:08:11.908 07:54:17 -- accel/accel.sh@20 -- # IFS=: 00:08:11.908 07:54:17 -- accel/accel.sh@20 -- # read -r var val 00:08:11.908 07:54:17 -- accel/accel.sh@21 -- # val=32 00:08:11.908 07:54:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.908 07:54:17 -- accel/accel.sh@20 -- # IFS=: 00:08:11.908 07:54:17 -- accel/accel.sh@20 -- # read -r var val 00:08:11.908 07:54:17 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:08:11.908 07:54:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.908 07:54:17 -- accel/accel.sh@20 -- # IFS=: 00:08:11.908 07:54:17 -- accel/accel.sh@20 -- # read -r var val 00:08:11.908 07:54:17 -- accel/accel.sh@21 -- # val= 00:08:11.908 07:54:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.908 07:54:17 -- accel/accel.sh@20 -- # IFS=: 00:08:11.908 07:54:17 -- accel/accel.sh@20 -- # read -r var val 00:08:11.908 07:54:17 -- accel/accel.sh@21 -- # val=software 00:08:11.908 07:54:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.908 07:54:17 -- accel/accel.sh@23 -- # accel_module=software 00:08:11.908 07:54:17 -- accel/accel.sh@20 -- # IFS=: 00:08:11.908 07:54:17 -- accel/accel.sh@20 -- # read -r var val 00:08:11.908 07:54:17 -- accel/accel.sh@21 -- # val=32 00:08:11.908 07:54:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.908 07:54:17 -- accel/accel.sh@20 -- # IFS=: 00:08:11.908 07:54:17 -- accel/accel.sh@20 -- # read -r var val 00:08:11.908 07:54:17 -- accel/accel.sh@21 -- # val=32 00:08:11.908 07:54:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.908 07:54:17 -- accel/accel.sh@20 -- # IFS=: 00:08:11.908 07:54:17 -- accel/accel.sh@20 -- # read -r var val 00:08:11.908 07:54:17 -- accel/accel.sh@21 -- # val=1 00:08:11.908 07:54:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.908 07:54:17 -- accel/accel.sh@20 -- # IFS=: 00:08:11.908 07:54:17 -- accel/accel.sh@20 -- # read -r var val 00:08:11.908 07:54:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:11.908 07:54:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.908 07:54:17 -- accel/accel.sh@20 -- # IFS=: 00:08:11.908 07:54:17 -- accel/accel.sh@20 -- # read -r var val 00:08:11.908 07:54:17 -- accel/accel.sh@21 -- # val=Yes 00:08:11.908 07:54:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.908 07:54:17 -- accel/accel.sh@20 -- # IFS=: 00:08:11.908 07:54:17 -- accel/accel.sh@20 -- # read -r var val 00:08:11.908 07:54:17 -- accel/accel.sh@21 -- # val= 00:08:11.908 07:54:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.908 07:54:17 -- accel/accel.sh@20 -- # IFS=: 00:08:11.908 07:54:17 -- accel/accel.sh@20 -- # read -r var val 00:08:11.908 07:54:17 -- accel/accel.sh@21 -- # val= 00:08:11.908 07:54:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.908 07:54:17 -- accel/accel.sh@20 -- # IFS=: 00:08:11.908 07:54:17 -- accel/accel.sh@20 -- # read -r var val 00:08:12.845 07:54:18 -- accel/accel.sh@21 -- # val= 00:08:12.845 07:54:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.845 07:54:18 -- accel/accel.sh@20 -- # IFS=: 00:08:12.845 07:54:18 -- accel/accel.sh@20 -- # read -r var val 00:08:12.845 07:54:18 -- accel/accel.sh@21 -- # val= 00:08:12.845 07:54:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.845 07:54:18 -- accel/accel.sh@20 -- # IFS=: 00:08:12.845 07:54:18 -- accel/accel.sh@20 -- # read -r var val 00:08:12.845 07:54:18 -- accel/accel.sh@21 -- # val= 00:08:12.845 07:54:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.845 07:54:18 -- accel/accel.sh@20 -- # IFS=: 00:08:12.845 07:54:18 -- accel/accel.sh@20 -- # read -r var val 00:08:12.845 07:54:18 -- accel/accel.sh@21 -- # val= 00:08:12.845 07:54:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.845 07:54:18 -- accel/accel.sh@20 -- # IFS=: 00:08:12.845 07:54:18 -- accel/accel.sh@20 -- # read -r var val 00:08:12.845 07:54:18 -- accel/accel.sh@21 -- # val= 00:08:12.845 07:54:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.845 07:54:18 -- accel/accel.sh@20 -- # IFS=: 00:08:12.845 07:54:18 -- 
accel/accel.sh@20 -- # read -r var val 00:08:12.845 07:54:18 -- accel/accel.sh@21 -- # val= 00:08:12.845 07:54:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.845 07:54:18 -- accel/accel.sh@20 -- # IFS=: 00:08:12.845 07:54:18 -- accel/accel.sh@20 -- # read -r var val 00:08:13.104 ************************************ 00:08:13.104 END TEST accel_crc32c 00:08:13.104 ************************************ 00:08:13.104 07:54:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:13.104 07:54:18 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:08:13.104 07:54:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:13.104 00:08:13.104 real 0m3.043s 00:08:13.104 user 0m2.456s 00:08:13.104 sys 0m0.264s 00:08:13.104 07:54:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.104 07:54:18 -- common/autotest_common.sh@10 -- # set +x 00:08:13.104 07:54:18 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:08:13.104 07:54:18 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:08:13.104 07:54:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:13.104 07:54:18 -- common/autotest_common.sh@10 -- # set +x 00:08:13.104 ************************************ 00:08:13.104 START TEST accel_crc32c_C2 00:08:13.104 ************************************ 00:08:13.104 07:54:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:08:13.104 07:54:18 -- accel/accel.sh@16 -- # local accel_opc 00:08:13.104 07:54:18 -- accel/accel.sh@17 -- # local accel_module 00:08:13.104 07:54:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:08:13.104 07:54:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:08:13.104 07:54:18 -- accel/accel.sh@12 -- # build_accel_config 00:08:13.104 07:54:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:13.104 07:54:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.104 07:54:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.104 07:54:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:13.104 07:54:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:13.104 07:54:18 -- accel/accel.sh@41 -- # local IFS=, 00:08:13.104 07:54:18 -- accel/accel.sh@42 -- # jq -r . 00:08:13.104 [2024-07-13 07:54:18.846289] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:13.104 [2024-07-13 07:54:18.846525] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54494 ] 00:08:13.363 [2024-07-13 07:54:18.985015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.363 [2024-07-13 07:54:19.040201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.735 07:54:20 -- accel/accel.sh@18 -- # out=' 00:08:14.735 SPDK Configuration: 00:08:14.735 Core mask: 0x1 00:08:14.735 00:08:14.735 Accel Perf Configuration: 00:08:14.735 Workload Type: crc32c 00:08:14.735 CRC-32C seed: 0 00:08:14.735 Transfer size: 4096 bytes 00:08:14.735 Vector count 2 00:08:14.735 Module: software 00:08:14.735 Queue depth: 32 00:08:14.735 Allocate depth: 32 00:08:14.735 # threads/core: 1 00:08:14.735 Run time: 1 seconds 00:08:14.735 Verify: Yes 00:08:14.735 00:08:14.735 Running for 1 seconds... 
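The same arithmetic applies to the vector-count-2 run below: 60832 transfers/s × 4096 B ≈ 237 MiB/s, which matches the Total line; the per-core row shows 475 MiB/s, which looks like the same rate counted once per vector element (60832 × 2 × 4096 B). Treat that reading as an inference from the numbers, not something the tool documents.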
00:08:14.735 00:08:14.735 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:14.735 ------------------------------------------------------------------------------------ 00:08:14.735 0,0 60832/s 475 MiB/s 0 0 00:08:14.735 ==================================================================================== 00:08:14.735 Total 60832/s 237 MiB/s 0 0' 00:08:14.735 07:54:20 -- accel/accel.sh@20 -- # IFS=: 00:08:14.735 07:54:20 -- accel/accel.sh@20 -- # read -r var val 00:08:14.735 07:54:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:08:14.735 07:54:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:08:14.735 07:54:20 -- accel/accel.sh@12 -- # build_accel_config 00:08:14.735 07:54:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:14.735 07:54:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:14.735 07:54:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:14.735 07:54:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:14.735 07:54:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:14.735 07:54:20 -- accel/accel.sh@41 -- # local IFS=, 00:08:14.735 07:54:20 -- accel/accel.sh@42 -- # jq -r . 00:08:14.735 [2024-07-13 07:54:20.361532] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:14.735 [2024-07-13 07:54:20.361740] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54528 ] 00:08:14.735 [2024-07-13 07:54:20.496337] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.993 [2024-07-13 07:54:20.550804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.993 07:54:20 -- accel/accel.sh@21 -- # val= 00:08:14.993 07:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.993 07:54:20 -- accel/accel.sh@20 -- # IFS=: 00:08:14.993 07:54:20 -- accel/accel.sh@20 -- # read -r var val 00:08:14.993 07:54:20 -- accel/accel.sh@21 -- # val= 00:08:14.993 07:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.993 07:54:20 -- accel/accel.sh@20 -- # IFS=: 00:08:14.993 07:54:20 -- accel/accel.sh@20 -- # read -r var val 00:08:14.993 07:54:20 -- accel/accel.sh@21 -- # val=0x1 00:08:14.993 07:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.993 07:54:20 -- accel/accel.sh@20 -- # IFS=: 00:08:14.993 07:54:20 -- accel/accel.sh@20 -- # read -r var val 00:08:14.993 07:54:20 -- accel/accel.sh@21 -- # val= 00:08:14.993 07:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.993 07:54:20 -- accel/accel.sh@20 -- # IFS=: 00:08:14.993 07:54:20 -- accel/accel.sh@20 -- # read -r var val 00:08:14.993 07:54:20 -- accel/accel.sh@21 -- # val= 00:08:14.993 07:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.993 07:54:20 -- accel/accel.sh@20 -- # IFS=: 00:08:14.993 07:54:20 -- accel/accel.sh@20 -- # read -r var val 00:08:14.993 07:54:20 -- accel/accel.sh@21 -- # val=crc32c 00:08:14.993 07:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.994 07:54:20 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:08:14.994 07:54:20 -- accel/accel.sh@20 -- # IFS=: 00:08:14.994 07:54:20 -- accel/accel.sh@20 -- # read -r var val 00:08:14.994 07:54:20 -- accel/accel.sh@21 -- # val=0 00:08:14.994 07:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.994 07:54:20 -- accel/accel.sh@20 -- # IFS=: 00:08:14.994 07:54:20 -- accel/accel.sh@20 -- # read -r var val 00:08:14.994 07:54:20 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:08:14.994 07:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.994 07:54:20 -- accel/accel.sh@20 -- # IFS=: 00:08:14.994 07:54:20 -- accel/accel.sh@20 -- # read -r var val 00:08:14.994 07:54:20 -- accel/accel.sh@21 -- # val= 00:08:14.994 07:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.994 07:54:20 -- accel/accel.sh@20 -- # IFS=: 00:08:14.994 07:54:20 -- accel/accel.sh@20 -- # read -r var val 00:08:14.994 07:54:20 -- accel/accel.sh@21 -- # val=software 00:08:14.994 07:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.994 07:54:20 -- accel/accel.sh@23 -- # accel_module=software 00:08:14.994 07:54:20 -- accel/accel.sh@20 -- # IFS=: 00:08:14.994 07:54:20 -- accel/accel.sh@20 -- # read -r var val 00:08:14.994 07:54:20 -- accel/accel.sh@21 -- # val=32 00:08:14.994 07:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.994 07:54:20 -- accel/accel.sh@20 -- # IFS=: 00:08:14.994 07:54:20 -- accel/accel.sh@20 -- # read -r var val 00:08:14.994 07:54:20 -- accel/accel.sh@21 -- # val=32 00:08:14.994 07:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.994 07:54:20 -- accel/accel.sh@20 -- # IFS=: 00:08:14.994 07:54:20 -- accel/accel.sh@20 -- # read -r var val 00:08:14.994 07:54:20 -- accel/accel.sh@21 -- # val=1 00:08:14.994 07:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.994 07:54:20 -- accel/accel.sh@20 -- # IFS=: 00:08:14.994 07:54:20 -- accel/accel.sh@20 -- # read -r var val 00:08:14.994 07:54:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:14.994 07:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.994 07:54:20 -- accel/accel.sh@20 -- # IFS=: 00:08:14.994 07:54:20 -- accel/accel.sh@20 -- # read -r var val 00:08:14.994 07:54:20 -- accel/accel.sh@21 -- # val=Yes 00:08:14.994 07:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.994 07:54:20 -- accel/accel.sh@20 -- # IFS=: 00:08:14.994 07:54:20 -- accel/accel.sh@20 -- # read -r var val 00:08:14.994 07:54:20 -- accel/accel.sh@21 -- # val= 00:08:14.994 07:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.994 07:54:20 -- accel/accel.sh@20 -- # IFS=: 00:08:14.994 07:54:20 -- accel/accel.sh@20 -- # read -r var val 00:08:14.994 07:54:20 -- accel/accel.sh@21 -- # val= 00:08:14.994 07:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.994 07:54:20 -- accel/accel.sh@20 -- # IFS=: 00:08:14.994 07:54:20 -- accel/accel.sh@20 -- # read -r var val 00:08:16.366 07:54:21 -- accel/accel.sh@21 -- # val= 00:08:16.366 07:54:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.366 07:54:21 -- accel/accel.sh@20 -- # IFS=: 00:08:16.366 07:54:21 -- accel/accel.sh@20 -- # read -r var val 00:08:16.366 07:54:21 -- accel/accel.sh@21 -- # val= 00:08:16.366 07:54:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.366 07:54:21 -- accel/accel.sh@20 -- # IFS=: 00:08:16.366 07:54:21 -- accel/accel.sh@20 -- # read -r var val 00:08:16.366 07:54:21 -- accel/accel.sh@21 -- # val= 00:08:16.366 07:54:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.366 07:54:21 -- accel/accel.sh@20 -- # IFS=: 00:08:16.366 07:54:21 -- accel/accel.sh@20 -- # read -r var val 00:08:16.366 07:54:21 -- accel/accel.sh@21 -- # val= 00:08:16.366 07:54:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.366 07:54:21 -- accel/accel.sh@20 -- # IFS=: 00:08:16.366 07:54:21 -- accel/accel.sh@20 -- # read -r var val 00:08:16.366 07:54:21 -- accel/accel.sh@21 -- # val= 00:08:16.366 07:54:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.366 07:54:21 -- accel/accel.sh@20 -- # IFS=: 00:08:16.366 07:54:21 -- 
accel/accel.sh@20 -- # read -r var val 00:08:16.366 07:54:21 -- accel/accel.sh@21 -- # val= 00:08:16.366 07:54:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.366 07:54:21 -- accel/accel.sh@20 -- # IFS=: 00:08:16.366 07:54:21 -- accel/accel.sh@20 -- # read -r var val 00:08:16.366 ************************************ 00:08:16.366 END TEST accel_crc32c_C2 00:08:16.366 ************************************ 00:08:16.366 07:54:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:16.366 07:54:21 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:08:16.366 07:54:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:16.366 00:08:16.366 real 0m3.032s 00:08:16.366 user 0m2.461s 00:08:16.366 sys 0m0.278s 00:08:16.366 07:54:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.366 07:54:21 -- common/autotest_common.sh@10 -- # set +x 00:08:16.366 07:54:21 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:08:16.366 07:54:21 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:16.366 07:54:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:16.366 07:54:21 -- common/autotest_common.sh@10 -- # set +x 00:08:16.366 ************************************ 00:08:16.366 START TEST accel_copy 00:08:16.366 ************************************ 00:08:16.366 07:54:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:08:16.366 07:54:21 -- accel/accel.sh@16 -- # local accel_opc 00:08:16.366 07:54:21 -- accel/accel.sh@17 -- # local accel_module 00:08:16.366 07:54:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:08:16.366 07:54:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:08:16.366 07:54:21 -- accel/accel.sh@12 -- # build_accel_config 00:08:16.366 07:54:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:16.366 07:54:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:16.366 07:54:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:16.366 07:54:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:16.366 07:54:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:16.366 07:54:21 -- accel/accel.sh@41 -- # local IFS=, 00:08:16.366 07:54:21 -- accel/accel.sh@42 -- # jq -r . 00:08:16.366 [2024-07-13 07:54:21.938089] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:16.366 [2024-07-13 07:54:21.938311] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54563 ] 00:08:16.366 [2024-07-13 07:54:22.076534] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.366 [2024-07-13 07:54:22.135549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.737 07:54:23 -- accel/accel.sh@18 -- # out=' 00:08:17.737 SPDK Configuration: 00:08:17.737 Core mask: 0x1 00:08:17.737 00:08:17.737 Accel Perf Configuration: 00:08:17.737 Workload Type: copy 00:08:17.737 Transfer size: 4096 bytes 00:08:17.737 Vector count 1 00:08:17.737 Module: software 00:08:17.737 Queue depth: 32 00:08:17.737 Allocate depth: 32 00:08:17.737 # threads/core: 1 00:08:17.737 Run time: 1 seconds 00:08:17.737 Verify: Yes 00:08:17.737 00:08:17.737 Running for 1 seconds... 
00:08:17.737 00:08:17.737 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:17.737 ------------------------------------------------------------------------------------ 00:08:17.737 0,0 928288/s 3626 MiB/s 0 0 00:08:17.737 ==================================================================================== 00:08:17.737 Total 928288/s 3626 MiB/s 0 0' 00:08:17.737 07:54:23 -- accel/accel.sh@20 -- # IFS=: 00:08:17.737 07:54:23 -- accel/accel.sh@20 -- # read -r var val 00:08:17.737 07:54:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:08:17.737 07:54:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:08:17.737 07:54:23 -- accel/accel.sh@12 -- # build_accel_config 00:08:17.737 07:54:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:17.737 07:54:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:17.737 07:54:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:17.737 07:54:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:17.737 07:54:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:17.737 07:54:23 -- accel/accel.sh@41 -- # local IFS=, 00:08:17.737 07:54:23 -- accel/accel.sh@42 -- # jq -r . 00:08:17.737 [2024-07-13 07:54:23.458251] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:17.737 [2024-07-13 07:54:23.458492] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54599 ] 00:08:17.995 [2024-07-13 07:54:23.594890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.995 [2024-07-13 07:54:23.650374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.995 07:54:23 -- accel/accel.sh@21 -- # val= 00:08:17.995 07:54:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.995 07:54:23 -- accel/accel.sh@20 -- # IFS=: 00:08:17.995 07:54:23 -- accel/accel.sh@20 -- # read -r var val 00:08:17.995 07:54:23 -- accel/accel.sh@21 -- # val= 00:08:17.995 07:54:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.995 07:54:23 -- accel/accel.sh@20 -- # IFS=: 00:08:17.995 07:54:23 -- accel/accel.sh@20 -- # read -r var val 00:08:17.995 07:54:23 -- accel/accel.sh@21 -- # val=0x1 00:08:17.995 07:54:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.995 07:54:23 -- accel/accel.sh@20 -- # IFS=: 00:08:17.995 07:54:23 -- accel/accel.sh@20 -- # read -r var val 00:08:17.995 07:54:23 -- accel/accel.sh@21 -- # val= 00:08:17.996 07:54:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.996 07:54:23 -- accel/accel.sh@20 -- # IFS=: 00:08:17.996 07:54:23 -- accel/accel.sh@20 -- # read -r var val 00:08:17.996 07:54:23 -- accel/accel.sh@21 -- # val= 00:08:17.996 07:54:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.996 07:54:23 -- accel/accel.sh@20 -- # IFS=: 00:08:17.996 07:54:23 -- accel/accel.sh@20 -- # read -r var val 00:08:17.996 07:54:23 -- accel/accel.sh@21 -- # val=copy 00:08:17.996 07:54:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.996 07:54:23 -- accel/accel.sh@24 -- # accel_opc=copy 00:08:17.996 07:54:23 -- accel/accel.sh@20 -- # IFS=: 00:08:17.996 07:54:23 -- accel/accel.sh@20 -- # read -r var val 00:08:17.996 07:54:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:17.996 07:54:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.996 07:54:23 -- accel/accel.sh@20 -- # IFS=: 00:08:17.996 07:54:23 -- accel/accel.sh@20 -- # read -r var val 00:08:17.996 07:54:23 -- 
accel/accel.sh@21 -- # val= 00:08:17.996 07:54:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.996 07:54:23 -- accel/accel.sh@20 -- # IFS=: 00:08:17.996 07:54:23 -- accel/accel.sh@20 -- # read -r var val 00:08:17.996 07:54:23 -- accel/accel.sh@21 -- # val=software 00:08:17.996 07:54:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.996 07:54:23 -- accel/accel.sh@23 -- # accel_module=software 00:08:17.996 07:54:23 -- accel/accel.sh@20 -- # IFS=: 00:08:17.996 07:54:23 -- accel/accel.sh@20 -- # read -r var val 00:08:17.996 07:54:23 -- accel/accel.sh@21 -- # val=32 00:08:17.996 07:54:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.996 07:54:23 -- accel/accel.sh@20 -- # IFS=: 00:08:17.996 07:54:23 -- accel/accel.sh@20 -- # read -r var val 00:08:17.996 07:54:23 -- accel/accel.sh@21 -- # val=32 00:08:17.996 07:54:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.996 07:54:23 -- accel/accel.sh@20 -- # IFS=: 00:08:17.996 07:54:23 -- accel/accel.sh@20 -- # read -r var val 00:08:17.996 07:54:23 -- accel/accel.sh@21 -- # val=1 00:08:17.996 07:54:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.996 07:54:23 -- accel/accel.sh@20 -- # IFS=: 00:08:17.996 07:54:23 -- accel/accel.sh@20 -- # read -r var val 00:08:17.996 07:54:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:17.996 07:54:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.996 07:54:23 -- accel/accel.sh@20 -- # IFS=: 00:08:17.996 07:54:23 -- accel/accel.sh@20 -- # read -r var val 00:08:17.996 07:54:23 -- accel/accel.sh@21 -- # val=Yes 00:08:17.996 07:54:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.996 07:54:23 -- accel/accel.sh@20 -- # IFS=: 00:08:17.996 07:54:23 -- accel/accel.sh@20 -- # read -r var val 00:08:17.996 07:54:23 -- accel/accel.sh@21 -- # val= 00:08:17.996 07:54:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.996 07:54:23 -- accel/accel.sh@20 -- # IFS=: 00:08:17.996 07:54:23 -- accel/accel.sh@20 -- # read -r var val 00:08:17.996 07:54:23 -- accel/accel.sh@21 -- # val= 00:08:17.996 07:54:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.996 07:54:23 -- accel/accel.sh@20 -- # IFS=: 00:08:17.996 07:54:23 -- accel/accel.sh@20 -- # read -r var val 00:08:19.373 07:54:24 -- accel/accel.sh@21 -- # val= 00:08:19.373 07:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.373 07:54:24 -- accel/accel.sh@20 -- # IFS=: 00:08:19.373 07:54:24 -- accel/accel.sh@20 -- # read -r var val 00:08:19.373 07:54:24 -- accel/accel.sh@21 -- # val= 00:08:19.373 07:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.373 07:54:24 -- accel/accel.sh@20 -- # IFS=: 00:08:19.373 07:54:24 -- accel/accel.sh@20 -- # read -r var val 00:08:19.373 07:54:24 -- accel/accel.sh@21 -- # val= 00:08:19.373 07:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.373 07:54:24 -- accel/accel.sh@20 -- # IFS=: 00:08:19.373 07:54:24 -- accel/accel.sh@20 -- # read -r var val 00:08:19.373 07:54:24 -- accel/accel.sh@21 -- # val= 00:08:19.373 07:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.373 07:54:24 -- accel/accel.sh@20 -- # IFS=: 00:08:19.373 07:54:24 -- accel/accel.sh@20 -- # read -r var val 00:08:19.373 07:54:24 -- accel/accel.sh@21 -- # val= 00:08:19.373 07:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.373 07:54:24 -- accel/accel.sh@20 -- # IFS=: 00:08:19.373 07:54:24 -- accel/accel.sh@20 -- # read -r var val 00:08:19.373 07:54:24 -- accel/accel.sh@21 -- # val= 00:08:19.373 07:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.373 07:54:24 -- accel/accel.sh@20 -- # IFS=: 00:08:19.373 07:54:24 -- 
accel/accel.sh@20 -- # read -r var val 00:08:19.373 ************************************ 00:08:19.373 END TEST accel_copy 00:08:19.373 ************************************ 00:08:19.373 07:54:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:19.373 07:54:24 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:08:19.373 07:54:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:19.373 00:08:19.373 real 0m3.040s 00:08:19.373 user 0m2.479s 00:08:19.373 sys 0m0.266s 00:08:19.373 07:54:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.373 07:54:24 -- common/autotest_common.sh@10 -- # set +x 00:08:19.373 07:54:24 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:19.373 07:54:24 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:08:19.373 07:54:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:19.373 07:54:24 -- common/autotest_common.sh@10 -- # set +x 00:08:19.373 ************************************ 00:08:19.373 START TEST accel_fill 00:08:19.373 ************************************ 00:08:19.373 07:54:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:19.373 07:54:24 -- accel/accel.sh@16 -- # local accel_opc 00:08:19.373 07:54:24 -- accel/accel.sh@17 -- # local accel_module 00:08:19.373 07:54:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:19.373 07:54:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:19.373 07:54:24 -- accel/accel.sh@12 -- # build_accel_config 00:08:19.373 07:54:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:19.373 07:54:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:19.373 07:54:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:19.373 07:54:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:19.373 07:54:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:19.373 07:54:24 -- accel/accel.sh@41 -- # local IFS=, 00:08:19.373 07:54:24 -- accel/accel.sh@42 -- # jq -r . 00:08:19.373 [2024-07-13 07:54:25.026037] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:19.373 [2024-07-13 07:54:25.026248] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54639 ] 00:08:19.373 [2024-07-13 07:54:25.162148] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.631 [2024-07-13 07:54:25.218035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.004 07:54:26 -- accel/accel.sh@18 -- # out=' 00:08:21.004 SPDK Configuration: 00:08:21.004 Core mask: 0x1 00:08:21.004 00:08:21.004 Accel Perf Configuration: 00:08:21.004 Workload Type: fill 00:08:21.004 Fill pattern: 0x80 00:08:21.004 Transfer size: 4096 bytes 00:08:21.004 Vector count 1 00:08:21.004 Module: software 00:08:21.004 Queue depth: 64 00:08:21.004 Allocate depth: 64 00:08:21.004 # threads/core: 1 00:08:21.004 Run time: 1 seconds 00:08:21.004 Verify: Yes 00:08:21.004 00:08:21.004 Running for 1 seconds... 
00:08:21.004 00:08:21.004 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:21.004 ------------------------------------------------------------------------------------ 00:08:21.004 0,0 1385856/s 5413 MiB/s 0 0 00:08:21.004 ==================================================================================== 00:08:21.004 Total 1385856/s 5413 MiB/s 0 0' 00:08:21.004 07:54:26 -- accel/accel.sh@20 -- # IFS=: 00:08:21.004 07:54:26 -- accel/accel.sh@20 -- # read -r var val 00:08:21.004 07:54:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:21.004 07:54:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:21.004 07:54:26 -- accel/accel.sh@12 -- # build_accel_config 00:08:21.004 07:54:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:21.004 07:54:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:21.004 07:54:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:21.004 07:54:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:21.004 07:54:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:21.004 07:54:26 -- accel/accel.sh@41 -- # local IFS=, 00:08:21.004 07:54:26 -- accel/accel.sh@42 -- # jq -r . 00:08:21.004 [2024-07-13 07:54:26.549928] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:21.004 [2024-07-13 07:54:26.550134] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54667 ] 00:08:21.004 [2024-07-13 07:54:26.682908] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.004 [2024-07-13 07:54:26.738337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.004 07:54:26 -- accel/accel.sh@21 -- # val= 00:08:21.004 07:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.004 07:54:26 -- accel/accel.sh@20 -- # IFS=: 00:08:21.004 07:54:26 -- accel/accel.sh@20 -- # read -r var val 00:08:21.004 07:54:26 -- accel/accel.sh@21 -- # val= 00:08:21.004 07:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.004 07:54:26 -- accel/accel.sh@20 -- # IFS=: 00:08:21.004 07:54:26 -- accel/accel.sh@20 -- # read -r var val 00:08:21.004 07:54:26 -- accel/accel.sh@21 -- # val=0x1 00:08:21.004 07:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.005 07:54:26 -- accel/accel.sh@20 -- # IFS=: 00:08:21.005 07:54:26 -- accel/accel.sh@20 -- # read -r var val 00:08:21.005 07:54:26 -- accel/accel.sh@21 -- # val= 00:08:21.005 07:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.005 07:54:26 -- accel/accel.sh@20 -- # IFS=: 00:08:21.005 07:54:26 -- accel/accel.sh@20 -- # read -r var val 00:08:21.005 07:54:26 -- accel/accel.sh@21 -- # val= 00:08:21.005 07:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.005 07:54:26 -- accel/accel.sh@20 -- # IFS=: 00:08:21.005 07:54:26 -- accel/accel.sh@20 -- # read -r var val 00:08:21.005 07:54:26 -- accel/accel.sh@21 -- # val=fill 00:08:21.005 07:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.005 07:54:26 -- accel/accel.sh@24 -- # accel_opc=fill 00:08:21.005 07:54:26 -- accel/accel.sh@20 -- # IFS=: 00:08:21.005 07:54:26 -- accel/accel.sh@20 -- # read -r var val 00:08:21.005 07:54:26 -- accel/accel.sh@21 -- # val=0x80 00:08:21.005 07:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.005 07:54:26 -- accel/accel.sh@20 -- # IFS=: 00:08:21.005 07:54:26 -- accel/accel.sh@20 -- # read -r var val 
00:08:21.005 07:54:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:21.005 07:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.005 07:54:26 -- accel/accel.sh@20 -- # IFS=: 00:08:21.005 07:54:26 -- accel/accel.sh@20 -- # read -r var val 00:08:21.005 07:54:26 -- accel/accel.sh@21 -- # val= 00:08:21.005 07:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.005 07:54:26 -- accel/accel.sh@20 -- # IFS=: 00:08:21.005 07:54:26 -- accel/accel.sh@20 -- # read -r var val 00:08:21.005 07:54:26 -- accel/accel.sh@21 -- # val=software 00:08:21.005 07:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.005 07:54:26 -- accel/accel.sh@23 -- # accel_module=software 00:08:21.005 07:54:26 -- accel/accel.sh@20 -- # IFS=: 00:08:21.005 07:54:26 -- accel/accel.sh@20 -- # read -r var val 00:08:21.005 07:54:26 -- accel/accel.sh@21 -- # val=64 00:08:21.005 07:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.005 07:54:26 -- accel/accel.sh@20 -- # IFS=: 00:08:21.005 07:54:26 -- accel/accel.sh@20 -- # read -r var val 00:08:21.005 07:54:26 -- accel/accel.sh@21 -- # val=64 00:08:21.005 07:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.005 07:54:26 -- accel/accel.sh@20 -- # IFS=: 00:08:21.005 07:54:26 -- accel/accel.sh@20 -- # read -r var val 00:08:21.005 07:54:26 -- accel/accel.sh@21 -- # val=1 00:08:21.005 07:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.005 07:54:26 -- accel/accel.sh@20 -- # IFS=: 00:08:21.005 07:54:26 -- accel/accel.sh@20 -- # read -r var val 00:08:21.005 07:54:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:21.005 07:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.005 07:54:26 -- accel/accel.sh@20 -- # IFS=: 00:08:21.005 07:54:26 -- accel/accel.sh@20 -- # read -r var val 00:08:21.005 07:54:26 -- accel/accel.sh@21 -- # val=Yes 00:08:21.005 07:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.005 07:54:26 -- accel/accel.sh@20 -- # IFS=: 00:08:21.005 07:54:26 -- accel/accel.sh@20 -- # read -r var val 00:08:21.005 07:54:26 -- accel/accel.sh@21 -- # val= 00:08:21.005 07:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.005 07:54:26 -- accel/accel.sh@20 -- # IFS=: 00:08:21.005 07:54:26 -- accel/accel.sh@20 -- # read -r var val 00:08:21.005 07:54:26 -- accel/accel.sh@21 -- # val= 00:08:21.005 07:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.005 07:54:26 -- accel/accel.sh@20 -- # IFS=: 00:08:21.005 07:54:26 -- accel/accel.sh@20 -- # read -r var val 00:08:22.483 07:54:27 -- accel/accel.sh@21 -- # val= 00:08:22.483 07:54:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:22.483 07:54:27 -- accel/accel.sh@20 -- # IFS=: 00:08:22.483 07:54:27 -- accel/accel.sh@20 -- # read -r var val 00:08:22.483 07:54:27 -- accel/accel.sh@21 -- # val= 00:08:22.483 07:54:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:22.483 07:54:27 -- accel/accel.sh@20 -- # IFS=: 00:08:22.483 07:54:27 -- accel/accel.sh@20 -- # read -r var val 00:08:22.483 07:54:27 -- accel/accel.sh@21 -- # val= 00:08:22.483 07:54:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:22.483 07:54:27 -- accel/accel.sh@20 -- # IFS=: 00:08:22.483 07:54:27 -- accel/accel.sh@20 -- # read -r var val 00:08:22.483 07:54:27 -- accel/accel.sh@21 -- # val= 00:08:22.483 07:54:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:22.483 07:54:27 -- accel/accel.sh@20 -- # IFS=: 00:08:22.483 07:54:27 -- accel/accel.sh@20 -- # read -r var val 00:08:22.483 07:54:27 -- accel/accel.sh@21 -- # val= 00:08:22.483 07:54:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:22.483 07:54:27 -- accel/accel.sh@20 -- # IFS=: 
00:08:22.483 07:54:27 -- accel/accel.sh@20 -- # read -r var val 00:08:22.483 07:54:27 -- accel/accel.sh@21 -- # val= 00:08:22.483 07:54:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:22.483 07:54:27 -- accel/accel.sh@20 -- # IFS=: 00:08:22.483 07:54:27 -- accel/accel.sh@20 -- # read -r var val 00:08:22.483 ************************************ 00:08:22.483 END TEST accel_fill 00:08:22.483 ************************************ 00:08:22.483 07:54:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:22.483 07:54:27 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:08:22.483 07:54:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:22.483 00:08:22.483 real 0m3.039s 00:08:22.483 user 0m2.467s 00:08:22.483 sys 0m0.282s 00:08:22.483 07:54:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.483 07:54:27 -- common/autotest_common.sh@10 -- # set +x 00:08:22.483 07:54:27 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:08:22.483 07:54:27 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:22.483 07:54:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:22.483 07:54:27 -- common/autotest_common.sh@10 -- # set +x 00:08:22.483 ************************************ 00:08:22.483 START TEST accel_copy_crc32c 00:08:22.483 ************************************ 00:08:22.483 07:54:27 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:08:22.483 07:54:27 -- accel/accel.sh@16 -- # local accel_opc 00:08:22.483 07:54:27 -- accel/accel.sh@17 -- # local accel_module 00:08:22.483 07:54:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:22.483 07:54:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:22.483 07:54:27 -- accel/accel.sh@12 -- # build_accel_config 00:08:22.483 07:54:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:22.483 07:54:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:22.483 07:54:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:22.483 07:54:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:22.483 07:54:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:22.483 07:54:27 -- accel/accel.sh@41 -- # local IFS=, 00:08:22.483 07:54:27 -- accel/accel.sh@42 -- # jq -r . 00:08:22.483 [2024-07-13 07:54:28.112893] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:22.483 [2024-07-13 07:54:28.113076] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54710 ] 00:08:22.483 [2024-07-13 07:54:28.248296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.742 [2024-07-13 07:54:28.304895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.120 07:54:29 -- accel/accel.sh@18 -- # out=' 00:08:24.120 SPDK Configuration: 00:08:24.120 Core mask: 0x1 00:08:24.120 00:08:24.120 Accel Perf Configuration: 00:08:24.120 Workload Type: copy_crc32c 00:08:24.120 CRC-32C seed: 0 00:08:24.120 Vector size: 4096 bytes 00:08:24.120 Transfer size: 4096 bytes 00:08:24.120 Vector count 1 00:08:24.120 Module: software 00:08:24.120 Queue depth: 32 00:08:24.120 Allocate depth: 32 00:08:24.120 # threads/core: 1 00:08:24.120 Run time: 1 seconds 00:08:24.120 Verify: Yes 00:08:24.120 00:08:24.120 Running for 1 seconds... 
00:08:24.120 00:08:24.120 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:24.120 ------------------------------------------------------------------------------------ 00:08:24.120 0,0 106848/s 417 MiB/s 0 0 00:08:24.120 ==================================================================================== 00:08:24.120 Total 106848/s 417 MiB/s 0 0' 00:08:24.120 07:54:29 -- accel/accel.sh@20 -- # IFS=: 00:08:24.120 07:54:29 -- accel/accel.sh@20 -- # read -r var val 00:08:24.120 07:54:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:24.120 07:54:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:24.120 07:54:29 -- accel/accel.sh@12 -- # build_accel_config 00:08:24.120 07:54:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:24.120 07:54:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:24.120 07:54:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:24.120 07:54:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:24.120 07:54:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:24.120 07:54:29 -- accel/accel.sh@41 -- # local IFS=, 00:08:24.120 07:54:29 -- accel/accel.sh@42 -- # jq -r . 00:08:24.120 [2024-07-13 07:54:29.638790] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:24.120 [2024-07-13 07:54:29.638982] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54732 ] 00:08:24.120 [2024-07-13 07:54:29.776599] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.120 [2024-07-13 07:54:29.832240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.120 07:54:29 -- accel/accel.sh@21 -- # val= 00:08:24.120 07:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.120 07:54:29 -- accel/accel.sh@20 -- # IFS=: 00:08:24.120 07:54:29 -- accel/accel.sh@20 -- # read -r var val 00:08:24.120 07:54:29 -- accel/accel.sh@21 -- # val= 00:08:24.120 07:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.120 07:54:29 -- accel/accel.sh@20 -- # IFS=: 00:08:24.120 07:54:29 -- accel/accel.sh@20 -- # read -r var val 00:08:24.120 07:54:29 -- accel/accel.sh@21 -- # val=0x1 00:08:24.120 07:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.120 07:54:29 -- accel/accel.sh@20 -- # IFS=: 00:08:24.120 07:54:29 -- accel/accel.sh@20 -- # read -r var val 00:08:24.120 07:54:29 -- accel/accel.sh@21 -- # val= 00:08:24.120 07:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.120 07:54:29 -- accel/accel.sh@20 -- # IFS=: 00:08:24.120 07:54:29 -- accel/accel.sh@20 -- # read -r var val 00:08:24.120 07:54:29 -- accel/accel.sh@21 -- # val= 00:08:24.120 07:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.120 07:54:29 -- accel/accel.sh@20 -- # IFS=: 00:08:24.120 07:54:29 -- accel/accel.sh@20 -- # read -r var val 00:08:24.120 07:54:29 -- accel/accel.sh@21 -- # val=copy_crc32c 00:08:24.120 07:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.120 07:54:29 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:08:24.120 07:54:29 -- accel/accel.sh@20 -- # IFS=: 00:08:24.120 07:54:29 -- accel/accel.sh@20 -- # read -r var val 00:08:24.120 07:54:29 -- accel/accel.sh@21 -- # val=0 00:08:24.120 07:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.120 07:54:29 -- accel/accel.sh@20 -- # IFS=: 00:08:24.120 07:54:29 -- accel/accel.sh@20 -- # read -r var val 00:08:24.120 
07:54:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:24.120 07:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.120 07:54:29 -- accel/accel.sh@20 -- # IFS=: 00:08:24.120 07:54:29 -- accel/accel.sh@20 -- # read -r var val 00:08:24.120 07:54:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:24.120 07:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.120 07:54:29 -- accel/accel.sh@20 -- # IFS=: 00:08:24.120 07:54:29 -- accel/accel.sh@20 -- # read -r var val 00:08:24.120 07:54:29 -- accel/accel.sh@21 -- # val= 00:08:24.120 07:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.120 07:54:29 -- accel/accel.sh@20 -- # IFS=: 00:08:24.120 07:54:29 -- accel/accel.sh@20 -- # read -r var val 00:08:24.120 07:54:29 -- accel/accel.sh@21 -- # val=software 00:08:24.120 07:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.120 07:54:29 -- accel/accel.sh@23 -- # accel_module=software 00:08:24.120 07:54:29 -- accel/accel.sh@20 -- # IFS=: 00:08:24.120 07:54:29 -- accel/accel.sh@20 -- # read -r var val 00:08:24.120 07:54:29 -- accel/accel.sh@21 -- # val=32 00:08:24.120 07:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.120 07:54:29 -- accel/accel.sh@20 -- # IFS=: 00:08:24.120 07:54:29 -- accel/accel.sh@20 -- # read -r var val 00:08:24.120 07:54:29 -- accel/accel.sh@21 -- # val=32 00:08:24.120 07:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.120 07:54:29 -- accel/accel.sh@20 -- # IFS=: 00:08:24.120 07:54:29 -- accel/accel.sh@20 -- # read -r var val 00:08:24.120 07:54:29 -- accel/accel.sh@21 -- # val=1 00:08:24.120 07:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.120 07:54:29 -- accel/accel.sh@20 -- # IFS=: 00:08:24.120 07:54:29 -- accel/accel.sh@20 -- # read -r var val 00:08:24.120 07:54:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:24.120 07:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.121 07:54:29 -- accel/accel.sh@20 -- # IFS=: 00:08:24.121 07:54:29 -- accel/accel.sh@20 -- # read -r var val 00:08:24.121 07:54:29 -- accel/accel.sh@21 -- # val=Yes 00:08:24.121 07:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.121 07:54:29 -- accel/accel.sh@20 -- # IFS=: 00:08:24.121 07:54:29 -- accel/accel.sh@20 -- # read -r var val 00:08:24.121 07:54:29 -- accel/accel.sh@21 -- # val= 00:08:24.121 07:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.121 07:54:29 -- accel/accel.sh@20 -- # IFS=: 00:08:24.121 07:54:29 -- accel/accel.sh@20 -- # read -r var val 00:08:24.121 07:54:29 -- accel/accel.sh@21 -- # val= 00:08:24.121 07:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.121 07:54:29 -- accel/accel.sh@20 -- # IFS=: 00:08:24.121 07:54:29 -- accel/accel.sh@20 -- # read -r var val 00:08:25.496 07:54:31 -- accel/accel.sh@21 -- # val= 00:08:25.496 07:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.496 07:54:31 -- accel/accel.sh@20 -- # IFS=: 00:08:25.497 07:54:31 -- accel/accel.sh@20 -- # read -r var val 00:08:25.497 07:54:31 -- accel/accel.sh@21 -- # val= 00:08:25.497 07:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.497 07:54:31 -- accel/accel.sh@20 -- # IFS=: 00:08:25.497 07:54:31 -- accel/accel.sh@20 -- # read -r var val 00:08:25.497 07:54:31 -- accel/accel.sh@21 -- # val= 00:08:25.497 07:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.497 07:54:31 -- accel/accel.sh@20 -- # IFS=: 00:08:25.497 07:54:31 -- accel/accel.sh@20 -- # read -r var val 00:08:25.497 07:54:31 -- accel/accel.sh@21 -- # val= 00:08:25.497 07:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.497 07:54:31 -- accel/accel.sh@20 -- # IFS=: 
00:08:25.497 07:54:31 -- accel/accel.sh@20 -- # read -r var val 00:08:25.497 07:54:31 -- accel/accel.sh@21 -- # val= 00:08:25.497 07:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.497 07:54:31 -- accel/accel.sh@20 -- # IFS=: 00:08:25.497 07:54:31 -- accel/accel.sh@20 -- # read -r var val 00:08:25.497 07:54:31 -- accel/accel.sh@21 -- # val= 00:08:25.497 07:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.497 07:54:31 -- accel/accel.sh@20 -- # IFS=: 00:08:25.497 07:54:31 -- accel/accel.sh@20 -- # read -r var val 00:08:25.497 ************************************ 00:08:25.497 END TEST accel_copy_crc32c 00:08:25.497 ************************************ 00:08:25.497 07:54:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:25.497 07:54:31 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:08:25.497 07:54:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:25.497 00:08:25.497 real 0m3.039s 00:08:25.497 user 0m2.475s 00:08:25.497 sys 0m0.276s 00:08:25.497 07:54:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.497 07:54:31 -- common/autotest_common.sh@10 -- # set +x 00:08:25.497 07:54:31 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:08:25.497 07:54:31 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:08:25.497 07:54:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:25.497 07:54:31 -- common/autotest_common.sh@10 -- # set +x 00:08:25.497 ************************************ 00:08:25.497 START TEST accel_copy_crc32c_C2 00:08:25.497 ************************************ 00:08:25.497 07:54:31 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:08:25.497 07:54:31 -- accel/accel.sh@16 -- # local accel_opc 00:08:25.497 07:54:31 -- accel/accel.sh@17 -- # local accel_module 00:08:25.497 07:54:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:25.497 07:54:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:25.497 07:54:31 -- accel/accel.sh@12 -- # build_accel_config 00:08:25.497 07:54:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:25.497 07:54:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:25.497 07:54:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:25.497 07:54:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:25.497 07:54:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:25.497 07:54:31 -- accel/accel.sh@41 -- # local IFS=, 00:08:25.497 07:54:31 -- accel/accel.sh@42 -- # jq -r . 00:08:25.497 [2024-07-13 07:54:31.211331] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:25.497 [2024-07-13 07:54:31.211546] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54779 ] 00:08:25.755 [2024-07-13 07:54:31.348436] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.755 [2024-07-13 07:54:31.403125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.140 07:54:32 -- accel/accel.sh@18 -- # out=' 00:08:27.140 SPDK Configuration: 00:08:27.140 Core mask: 0x1 00:08:27.140 00:08:27.140 Accel Perf Configuration: 00:08:27.140 Workload Type: copy_crc32c 00:08:27.140 CRC-32C seed: 0 00:08:27.140 Vector size: 4096 bytes 00:08:27.140 Transfer size: 8192 bytes 00:08:27.140 Vector count 2 00:08:27.140 Module: software 00:08:27.140 Queue depth: 32 00:08:27.140 Allocate depth: 32 00:08:27.140 # threads/core: 1 00:08:27.140 Run time: 1 seconds 00:08:27.140 Verify: Yes 00:08:27.140 00:08:27.140 Running for 1 seconds... 00:08:27.140 00:08:27.140 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:27.140 ------------------------------------------------------------------------------------ 00:08:27.140 0,0 55616/s 434 MiB/s 0 0 00:08:27.140 ==================================================================================== 00:08:27.140 Total 55616/s 434 MiB/s 0 0' 00:08:27.140 07:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:27.140 07:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:27.140 07:54:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:27.140 07:54:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:27.140 07:54:32 -- accel/accel.sh@12 -- # build_accel_config 00:08:27.140 07:54:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:27.140 07:54:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:27.140 07:54:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:27.140 07:54:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:27.140 07:54:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:27.140 07:54:32 -- accel/accel.sh@41 -- # local IFS=, 00:08:27.140 07:54:32 -- accel/accel.sh@42 -- # jq -r . 00:08:27.140 [2024-07-13 07:54:32.724208] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
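With -C 2 each copy_crc32c operation carries two 4096-byte source vectors, which is why the configuration above reports a transfer size of 8192 bytes. The bandwidth column is simply transfers per second times transfer size; a minimal shell sanity check, using the numbers from the run above:

  # 55616 transfers/s x 8192 bytes per transfer, converted to MiB/s
  echo $(( 55616 * 8192 / 1048576 ))   # prints 434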
00:08:27.140 [2024-07-13 07:54:32.724387] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54801 ] 00:08:27.140 [2024-07-13 07:54:32.855991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.140 [2024-07-13 07:54:32.910980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.403 07:54:32 -- accel/accel.sh@21 -- # val= 00:08:27.403 07:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:27.403 07:54:32 -- accel/accel.sh@21 -- # val= 00:08:27.403 07:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:27.403 07:54:32 -- accel/accel.sh@21 -- # val=0x1 00:08:27.403 07:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:27.403 07:54:32 -- accel/accel.sh@21 -- # val= 00:08:27.403 07:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:27.403 07:54:32 -- accel/accel.sh@21 -- # val= 00:08:27.403 07:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:27.403 07:54:32 -- accel/accel.sh@21 -- # val=copy_crc32c 00:08:27.403 07:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.403 07:54:32 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:27.403 07:54:32 -- accel/accel.sh@21 -- # val=0 00:08:27.403 07:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:27.403 07:54:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:27.403 07:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:27.403 07:54:32 -- accel/accel.sh@21 -- # val='8192 bytes' 00:08:27.403 07:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:27.403 07:54:32 -- accel/accel.sh@21 -- # val= 00:08:27.403 07:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:27.403 07:54:32 -- accel/accel.sh@21 -- # val=software 00:08:27.403 07:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.403 07:54:32 -- accel/accel.sh@23 -- # accel_module=software 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:27.403 07:54:32 -- accel/accel.sh@21 -- # val=32 00:08:27.403 07:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:27.403 07:54:32 -- accel/accel.sh@21 -- # val=32 
00:08:27.403 07:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:27.403 07:54:32 -- accel/accel.sh@21 -- # val=1 00:08:27.403 07:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:27.403 07:54:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:27.403 07:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:27.403 07:54:32 -- accel/accel.sh@21 -- # val=Yes 00:08:27.403 07:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:27.403 07:54:32 -- accel/accel.sh@21 -- # val= 00:08:27.403 07:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:27.403 07:54:32 -- accel/accel.sh@21 -- # val= 00:08:27.403 07:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # IFS=: 00:08:27.403 07:54:32 -- accel/accel.sh@20 -- # read -r var val 00:08:28.338 07:54:34 -- accel/accel.sh@21 -- # val= 00:08:28.338 07:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.338 07:54:34 -- accel/accel.sh@20 -- # IFS=: 00:08:28.338 07:54:34 -- accel/accel.sh@20 -- # read -r var val 00:08:28.338 07:54:34 -- accel/accel.sh@21 -- # val= 00:08:28.338 07:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.338 07:54:34 -- accel/accel.sh@20 -- # IFS=: 00:08:28.338 07:54:34 -- accel/accel.sh@20 -- # read -r var val 00:08:28.338 07:54:34 -- accel/accel.sh@21 -- # val= 00:08:28.338 07:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.338 07:54:34 -- accel/accel.sh@20 -- # IFS=: 00:08:28.338 07:54:34 -- accel/accel.sh@20 -- # read -r var val 00:08:28.338 07:54:34 -- accel/accel.sh@21 -- # val= 00:08:28.338 07:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.338 07:54:34 -- accel/accel.sh@20 -- # IFS=: 00:08:28.338 07:54:34 -- accel/accel.sh@20 -- # read -r var val 00:08:28.338 07:54:34 -- accel/accel.sh@21 -- # val= 00:08:28.338 07:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.338 07:54:34 -- accel/accel.sh@20 -- # IFS=: 00:08:28.338 07:54:34 -- accel/accel.sh@20 -- # read -r var val 00:08:28.338 07:54:34 -- accel/accel.sh@21 -- # val= 00:08:28.338 07:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.338 07:54:34 -- accel/accel.sh@20 -- # IFS=: 00:08:28.338 07:54:34 -- accel/accel.sh@20 -- # read -r var val 00:08:28.338 ************************************ 00:08:28.338 END TEST accel_copy_crc32c_C2 00:08:28.338 ************************************ 00:08:28.338 07:54:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:28.338 07:54:34 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:08:28.338 07:54:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:28.338 00:08:28.338 real 0m3.026s 00:08:28.338 user 0m2.496s 00:08:28.338 sys 0m0.259s 00:08:28.338 07:54:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.338 07:54:34 -- common/autotest_common.sh@10 -- # set +x 00:08:28.338 07:54:34 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:28.338 07:54:34 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:08:28.338 07:54:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:28.338 07:54:34 -- common/autotest_common.sh@10 -- # set +x 00:08:28.597 ************************************ 00:08:28.597 START TEST accel_dualcast 00:08:28.597 ************************************ 00:08:28.597 07:54:34 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:08:28.597 07:54:34 -- accel/accel.sh@16 -- # local accel_opc 00:08:28.597 07:54:34 -- accel/accel.sh@17 -- # local accel_module 00:08:28.597 07:54:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:08:28.597 07:54:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:28.597 07:54:34 -- accel/accel.sh@12 -- # build_accel_config 00:08:28.597 07:54:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:28.597 07:54:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:28.597 07:54:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:28.597 07:54:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:28.597 07:54:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:28.597 07:54:34 -- accel/accel.sh@41 -- # local IFS=, 00:08:28.597 07:54:34 -- accel/accel.sh@42 -- # jq -r . 00:08:28.597 [2024-07-13 07:54:34.284633] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:28.597 [2024-07-13 07:54:34.284919] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54852 ] 00:08:28.855 [2024-07-13 07:54:34.432999] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.855 [2024-07-13 07:54:34.494476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.229 07:54:35 -- accel/accel.sh@18 -- # out=' 00:08:30.229 SPDK Configuration: 00:08:30.229 Core mask: 0x1 00:08:30.229 00:08:30.229 Accel Perf Configuration: 00:08:30.229 Workload Type: dualcast 00:08:30.229 Transfer size: 4096 bytes 00:08:30.229 Vector count 1 00:08:30.229 Module: software 00:08:30.230 Queue depth: 32 00:08:30.230 Allocate depth: 32 00:08:30.230 # threads/core: 1 00:08:30.230 Run time: 1 seconds 00:08:30.230 Verify: Yes 00:08:30.230 00:08:30.230 Running for 1 seconds... 00:08:30.230 00:08:30.230 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:30.230 ------------------------------------------------------------------------------------ 00:08:30.230 0,0 683872/s 2671 MiB/s 0 0 00:08:30.230 ==================================================================================== 00:08:30.230 Total 683872/s 2671 MiB/s 0 0' 00:08:30.230 07:54:35 -- accel/accel.sh@20 -- # IFS=: 00:08:30.230 07:54:35 -- accel/accel.sh@20 -- # read -r var val 00:08:30.230 07:54:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:30.230 07:54:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:30.230 07:54:35 -- accel/accel.sh@12 -- # build_accel_config 00:08:30.230 07:54:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:30.230 07:54:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:30.230 07:54:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:30.230 07:54:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:30.230 07:54:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:30.230 07:54:35 -- accel/accel.sh@41 -- # local IFS=, 00:08:30.230 07:54:35 -- accel/accel.sh@42 -- # jq -r . 
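Each dualcast pass above reduces to a single accel_perf invocation; the test harness only adds a JSON accel config on /dev/fd/62. A minimal standalone sketch, assuming the same build tree and that dropping -c falls back to the default software module:

  # Run the dualcast workload for 1 second with result verification (-y)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y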
00:08:30.230 [2024-07-13 07:54:35.831638] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:30.230 [2024-07-13 07:54:35.831856] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54874 ] 00:08:30.230 [2024-07-13 07:54:35.984447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.487 [2024-07-13 07:54:36.045512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.487 07:54:36 -- accel/accel.sh@21 -- # val= 00:08:30.488 07:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.488 07:54:36 -- accel/accel.sh@20 -- # IFS=: 00:08:30.488 07:54:36 -- accel/accel.sh@20 -- # read -r var val 00:08:30.488 07:54:36 -- accel/accel.sh@21 -- # val= 00:08:30.488 07:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.488 07:54:36 -- accel/accel.sh@20 -- # IFS=: 00:08:30.488 07:54:36 -- accel/accel.sh@20 -- # read -r var val 00:08:30.488 07:54:36 -- accel/accel.sh@21 -- # val=0x1 00:08:30.488 07:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.488 07:54:36 -- accel/accel.sh@20 -- # IFS=: 00:08:30.488 07:54:36 -- accel/accel.sh@20 -- # read -r var val 00:08:30.488 07:54:36 -- accel/accel.sh@21 -- # val= 00:08:30.488 07:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.488 07:54:36 -- accel/accel.sh@20 -- # IFS=: 00:08:30.488 07:54:36 -- accel/accel.sh@20 -- # read -r var val 00:08:30.488 07:54:36 -- accel/accel.sh@21 -- # val= 00:08:30.488 07:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.488 07:54:36 -- accel/accel.sh@20 -- # IFS=: 00:08:30.488 07:54:36 -- accel/accel.sh@20 -- # read -r var val 00:08:30.488 07:54:36 -- accel/accel.sh@21 -- # val=dualcast 00:08:30.488 07:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.488 07:54:36 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:08:30.488 07:54:36 -- accel/accel.sh@20 -- # IFS=: 00:08:30.488 07:54:36 -- accel/accel.sh@20 -- # read -r var val 00:08:30.488 07:54:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:30.488 07:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.488 07:54:36 -- accel/accel.sh@20 -- # IFS=: 00:08:30.488 07:54:36 -- accel/accel.sh@20 -- # read -r var val 00:08:30.488 07:54:36 -- accel/accel.sh@21 -- # val= 00:08:30.488 07:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.488 07:54:36 -- accel/accel.sh@20 -- # IFS=: 00:08:30.488 07:54:36 -- accel/accel.sh@20 -- # read -r var val 00:08:30.488 07:54:36 -- accel/accel.sh@21 -- # val=software 00:08:30.488 07:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.488 07:54:36 -- accel/accel.sh@23 -- # accel_module=software 00:08:30.488 07:54:36 -- accel/accel.sh@20 -- # IFS=: 00:08:30.488 07:54:36 -- accel/accel.sh@20 -- # read -r var val 00:08:30.488 07:54:36 -- accel/accel.sh@21 -- # val=32 00:08:30.488 07:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.488 07:54:36 -- accel/accel.sh@20 -- # IFS=: 00:08:30.488 07:54:36 -- accel/accel.sh@20 -- # read -r var val 00:08:30.488 07:54:36 -- accel/accel.sh@21 -- # val=32 00:08:30.488 07:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.488 07:54:36 -- accel/accel.sh@20 -- # IFS=: 00:08:30.488 07:54:36 -- accel/accel.sh@20 -- # read -r var val 00:08:30.488 07:54:36 -- accel/accel.sh@21 -- # val=1 00:08:30.488 07:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.488 07:54:36 -- accel/accel.sh@20 -- # IFS=: 00:08:30.488 
07:54:36 -- accel/accel.sh@20 -- # read -r var val 00:08:30.488 07:54:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:30.488 07:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.488 07:54:36 -- accel/accel.sh@20 -- # IFS=: 00:08:30.488 07:54:36 -- accel/accel.sh@20 -- # read -r var val 00:08:30.488 07:54:36 -- accel/accel.sh@21 -- # val=Yes 00:08:30.488 07:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.488 07:54:36 -- accel/accel.sh@20 -- # IFS=: 00:08:30.488 07:54:36 -- accel/accel.sh@20 -- # read -r var val 00:08:30.488 07:54:36 -- accel/accel.sh@21 -- # val= 00:08:30.488 07:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.488 07:54:36 -- accel/accel.sh@20 -- # IFS=: 00:08:30.488 07:54:36 -- accel/accel.sh@20 -- # read -r var val 00:08:30.488 07:54:36 -- accel/accel.sh@21 -- # val= 00:08:30.488 07:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.488 07:54:36 -- accel/accel.sh@20 -- # IFS=: 00:08:30.488 07:54:36 -- accel/accel.sh@20 -- # read -r var val 00:08:31.861 07:54:37 -- accel/accel.sh@21 -- # val= 00:08:31.862 07:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.862 07:54:37 -- accel/accel.sh@20 -- # IFS=: 00:08:31.862 07:54:37 -- accel/accel.sh@20 -- # read -r var val 00:08:31.862 07:54:37 -- accel/accel.sh@21 -- # val= 00:08:31.862 07:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.862 07:54:37 -- accel/accel.sh@20 -- # IFS=: 00:08:31.862 07:54:37 -- accel/accel.sh@20 -- # read -r var val 00:08:31.862 07:54:37 -- accel/accel.sh@21 -- # val= 00:08:31.862 07:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.862 07:54:37 -- accel/accel.sh@20 -- # IFS=: 00:08:31.862 07:54:37 -- accel/accel.sh@20 -- # read -r var val 00:08:31.862 07:54:37 -- accel/accel.sh@21 -- # val= 00:08:31.862 07:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.862 07:54:37 -- accel/accel.sh@20 -- # IFS=: 00:08:31.862 07:54:37 -- accel/accel.sh@20 -- # read -r var val 00:08:31.862 07:54:37 -- accel/accel.sh@21 -- # val= 00:08:31.862 07:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.862 07:54:37 -- accel/accel.sh@20 -- # IFS=: 00:08:31.862 07:54:37 -- accel/accel.sh@20 -- # read -r var val 00:08:31.862 07:54:37 -- accel/accel.sh@21 -- # val= 00:08:31.862 07:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.862 07:54:37 -- accel/accel.sh@20 -- # IFS=: 00:08:31.862 07:54:37 -- accel/accel.sh@20 -- # read -r var val 00:08:31.862 ************************************ 00:08:31.862 END TEST accel_dualcast 00:08:31.862 ************************************ 00:08:31.862 07:54:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:31.862 07:54:37 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:08:31.862 07:54:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:31.862 00:08:31.862 real 0m3.098s 00:08:31.862 user 0m2.489s 00:08:31.862 sys 0m0.307s 00:08:31.862 07:54:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.862 07:54:37 -- common/autotest_common.sh@10 -- # set +x 00:08:31.862 07:54:37 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:31.862 07:54:37 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:31.862 07:54:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:31.862 07:54:37 -- common/autotest_common.sh@10 -- # set +x 00:08:31.862 ************************************ 00:08:31.862 START TEST accel_compare 00:08:31.862 ************************************ 00:08:31.862 07:54:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:08:31.862 
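The compare test opened above likewise wraps one accel_perf run, traced in the lines that follow; queue depth and allocate depth stay at the harness defaults of 32. A hedged reproduction sketch under the same assumptions:

  # Run the compare workload for 1 second with result verification (-y)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compare -y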
07:54:37 -- accel/accel.sh@16 -- # local accel_opc 00:08:31.862 07:54:37 -- accel/accel.sh@17 -- # local accel_module 00:08:31.862 07:54:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:08:31.862 07:54:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:31.862 07:54:37 -- accel/accel.sh@12 -- # build_accel_config 00:08:31.862 07:54:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:31.862 07:54:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:31.862 07:54:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:31.862 07:54:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:31.862 07:54:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:31.862 07:54:37 -- accel/accel.sh@41 -- # local IFS=, 00:08:31.862 07:54:37 -- accel/accel.sh@42 -- # jq -r . 00:08:31.862 [2024-07-13 07:54:37.437046] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:31.862 [2024-07-13 07:54:37.437264] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54921 ] 00:08:31.862 [2024-07-13 07:54:37.576007] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.862 [2024-07-13 07:54:37.631392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.238 07:54:38 -- accel/accel.sh@18 -- # out=' 00:08:33.238 SPDK Configuration: 00:08:33.238 Core mask: 0x1 00:08:33.238 00:08:33.238 Accel Perf Configuration: 00:08:33.238 Workload Type: compare 00:08:33.238 Transfer size: 4096 bytes 00:08:33.238 Vector count 1 00:08:33.238 Module: software 00:08:33.238 Queue depth: 32 00:08:33.238 Allocate depth: 32 00:08:33.238 # threads/core: 1 00:08:33.238 Run time: 1 seconds 00:08:33.238 Verify: Yes 00:08:33.238 00:08:33.238 Running for 1 seconds... 00:08:33.238 00:08:33.238 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:33.238 ------------------------------------------------------------------------------------ 00:08:33.238 0,0 1513152/s 5910 MiB/s 0 0 00:08:33.238 ==================================================================================== 00:08:33.238 Total 1513152/s 5910 MiB/s 0 0' 00:08:33.238 07:54:38 -- accel/accel.sh@20 -- # IFS=: 00:08:33.238 07:54:38 -- accel/accel.sh@20 -- # read -r var val 00:08:33.238 07:54:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:33.238 07:54:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:33.238 07:54:38 -- accel/accel.sh@12 -- # build_accel_config 00:08:33.238 07:54:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:33.238 07:54:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:33.238 07:54:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:33.238 07:54:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:33.238 07:54:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:33.238 07:54:38 -- accel/accel.sh@41 -- # local IFS=, 00:08:33.238 07:54:38 -- accel/accel.sh@42 -- # jq -r . 00:08:33.238 [2024-07-13 07:54:38.954407] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:33.238 [2024-07-13 07:54:38.954622] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54943 ] 00:08:33.497 [2024-07-13 07:54:39.088336] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.497 [2024-07-13 07:54:39.142623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.497 07:54:39 -- accel/accel.sh@21 -- # val= 00:08:33.497 07:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.497 07:54:39 -- accel/accel.sh@20 -- # IFS=: 00:08:33.497 07:54:39 -- accel/accel.sh@20 -- # read -r var val 00:08:33.497 07:54:39 -- accel/accel.sh@21 -- # val= 00:08:33.497 07:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.497 07:54:39 -- accel/accel.sh@20 -- # IFS=: 00:08:33.497 07:54:39 -- accel/accel.sh@20 -- # read -r var val 00:08:33.497 07:54:39 -- accel/accel.sh@21 -- # val=0x1 00:08:33.497 07:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.497 07:54:39 -- accel/accel.sh@20 -- # IFS=: 00:08:33.497 07:54:39 -- accel/accel.sh@20 -- # read -r var val 00:08:33.497 07:54:39 -- accel/accel.sh@21 -- # val= 00:08:33.497 07:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.497 07:54:39 -- accel/accel.sh@20 -- # IFS=: 00:08:33.497 07:54:39 -- accel/accel.sh@20 -- # read -r var val 00:08:33.497 07:54:39 -- accel/accel.sh@21 -- # val= 00:08:33.497 07:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.497 07:54:39 -- accel/accel.sh@20 -- # IFS=: 00:08:33.497 07:54:39 -- accel/accel.sh@20 -- # read -r var val 00:08:33.497 07:54:39 -- accel/accel.sh@21 -- # val=compare 00:08:33.497 07:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.497 07:54:39 -- accel/accel.sh@24 -- # accel_opc=compare 00:08:33.497 07:54:39 -- accel/accel.sh@20 -- # IFS=: 00:08:33.497 07:54:39 -- accel/accel.sh@20 -- # read -r var val 00:08:33.497 07:54:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:33.497 07:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.497 07:54:39 -- accel/accel.sh@20 -- # IFS=: 00:08:33.497 07:54:39 -- accel/accel.sh@20 -- # read -r var val 00:08:33.497 07:54:39 -- accel/accel.sh@21 -- # val= 00:08:33.497 07:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.497 07:54:39 -- accel/accel.sh@20 -- # IFS=: 00:08:33.497 07:54:39 -- accel/accel.sh@20 -- # read -r var val 00:08:33.497 07:54:39 -- accel/accel.sh@21 -- # val=software 00:08:33.497 07:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.497 07:54:39 -- accel/accel.sh@23 -- # accel_module=software 00:08:33.497 07:54:39 -- accel/accel.sh@20 -- # IFS=: 00:08:33.497 07:54:39 -- accel/accel.sh@20 -- # read -r var val 00:08:33.497 07:54:39 -- accel/accel.sh@21 -- # val=32 00:08:33.497 07:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.497 07:54:39 -- accel/accel.sh@20 -- # IFS=: 00:08:33.497 07:54:39 -- accel/accel.sh@20 -- # read -r var val 00:08:33.497 07:54:39 -- accel/accel.sh@21 -- # val=32 00:08:33.497 07:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.497 07:54:39 -- accel/accel.sh@20 -- # IFS=: 00:08:33.497 07:54:39 -- accel/accel.sh@20 -- # read -r var val 00:08:33.497 07:54:39 -- accel/accel.sh@21 -- # val=1 00:08:33.497 07:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.497 07:54:39 -- accel/accel.sh@20 -- # IFS=: 00:08:33.497 07:54:39 -- accel/accel.sh@20 -- # read -r var val 00:08:33.497 07:54:39 -- accel/accel.sh@21 -- # val='1 seconds' 
00:08:33.497 07:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.497 07:54:39 -- accel/accel.sh@20 -- # IFS=: 00:08:33.497 07:54:39 -- accel/accel.sh@20 -- # read -r var val 00:08:33.497 07:54:39 -- accel/accel.sh@21 -- # val=Yes 00:08:33.497 07:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.497 07:54:39 -- accel/accel.sh@20 -- # IFS=: 00:08:33.497 07:54:39 -- accel/accel.sh@20 -- # read -r var val 00:08:33.497 07:54:39 -- accel/accel.sh@21 -- # val= 00:08:33.497 07:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.497 07:54:39 -- accel/accel.sh@20 -- # IFS=: 00:08:33.497 07:54:39 -- accel/accel.sh@20 -- # read -r var val 00:08:33.497 07:54:39 -- accel/accel.sh@21 -- # val= 00:08:33.497 07:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.497 07:54:39 -- accel/accel.sh@20 -- # IFS=: 00:08:33.497 07:54:39 -- accel/accel.sh@20 -- # read -r var val 00:08:34.873 07:54:40 -- accel/accel.sh@21 -- # val= 00:08:34.873 07:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.873 07:54:40 -- accel/accel.sh@20 -- # IFS=: 00:08:34.873 07:54:40 -- accel/accel.sh@20 -- # read -r var val 00:08:34.873 07:54:40 -- accel/accel.sh@21 -- # val= 00:08:34.873 07:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.873 07:54:40 -- accel/accel.sh@20 -- # IFS=: 00:08:34.873 07:54:40 -- accel/accel.sh@20 -- # read -r var val 00:08:34.873 07:54:40 -- accel/accel.sh@21 -- # val= 00:08:34.873 07:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.873 07:54:40 -- accel/accel.sh@20 -- # IFS=: 00:08:34.873 07:54:40 -- accel/accel.sh@20 -- # read -r var val 00:08:34.873 07:54:40 -- accel/accel.sh@21 -- # val= 00:08:34.873 07:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.873 07:54:40 -- accel/accel.sh@20 -- # IFS=: 00:08:34.873 07:54:40 -- accel/accel.sh@20 -- # read -r var val 00:08:34.873 07:54:40 -- accel/accel.sh@21 -- # val= 00:08:34.873 07:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.873 07:54:40 -- accel/accel.sh@20 -- # IFS=: 00:08:34.873 07:54:40 -- accel/accel.sh@20 -- # read -r var val 00:08:34.873 07:54:40 -- accel/accel.sh@21 -- # val= 00:08:34.873 07:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.873 07:54:40 -- accel/accel.sh@20 -- # IFS=: 00:08:34.873 07:54:40 -- accel/accel.sh@20 -- # read -r var val 00:08:34.873 07:54:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:34.873 07:54:40 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:08:34.873 07:54:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:34.873 00:08:34.873 real 0m3.032s 00:08:34.873 user 0m2.472s 00:08:34.873 sys 0m0.267s 00:08:34.873 07:54:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.873 07:54:40 -- common/autotest_common.sh@10 -- # set +x 00:08:34.873 ************************************ 00:08:34.873 END TEST accel_compare 00:08:34.873 ************************************ 00:08:34.873 07:54:40 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:34.873 07:54:40 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:34.873 07:54:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:34.873 07:54:40 -- common/autotest_common.sh@10 -- # set +x 00:08:34.873 ************************************ 00:08:34.873 START TEST accel_xor 00:08:34.873 ************************************ 00:08:34.873 07:54:40 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:08:34.873 07:54:40 -- accel/accel.sh@16 -- # local accel_opc 00:08:34.873 07:54:40 -- accel/accel.sh@17 -- # local accel_module 00:08:34.873 
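The xor invocation traced just below relies on the default of two source buffers, reported as "Source buffers: 2" in the configuration dump further on. A minimal sketch of the equivalent standalone run, under the same assumptions as above:

  # XOR two source buffers into one destination for 1 second, with verification (-y)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y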
07:54:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:08:34.873 07:54:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:34.873 07:54:40 -- accel/accel.sh@12 -- # build_accel_config 00:08:34.873 07:54:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:34.873 07:54:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:34.873 07:54:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:34.873 07:54:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:34.873 07:54:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:34.873 07:54:40 -- accel/accel.sh@41 -- # local IFS=, 00:08:34.873 07:54:40 -- accel/accel.sh@42 -- # jq -r . 00:08:34.873 [2024-07-13 07:54:40.512953] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:34.873 [2024-07-13 07:54:40.513149] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54991 ] 00:08:34.873 [2024-07-13 07:54:40.645908] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.132 [2024-07-13 07:54:40.699722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.507 07:54:41 -- accel/accel.sh@18 -- # out=' 00:08:36.507 SPDK Configuration: 00:08:36.507 Core mask: 0x1 00:08:36.507 00:08:36.507 Accel Perf Configuration: 00:08:36.507 Workload Type: xor 00:08:36.507 Source buffers: 2 00:08:36.507 Transfer size: 4096 bytes 00:08:36.507 Vector count 1 00:08:36.507 Module: software 00:08:36.507 Queue depth: 32 00:08:36.507 Allocate depth: 32 00:08:36.507 # threads/core: 1 00:08:36.507 Run time: 1 seconds 00:08:36.507 Verify: Yes 00:08:36.507 00:08:36.507 Running for 1 seconds... 00:08:36.507 00:08:36.507 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:36.507 ------------------------------------------------------------------------------------ 00:08:36.507 0,0 42112/s 164 MiB/s 0 0 00:08:36.507 ==================================================================================== 00:08:36.507 Total 42112/s 164 MiB/s 0 0' 00:08:36.507 07:54:41 -- accel/accel.sh@20 -- # IFS=: 00:08:36.507 07:54:41 -- accel/accel.sh@20 -- # read -r var val 00:08:36.507 07:54:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:36.507 07:54:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:36.507 07:54:41 -- accel/accel.sh@12 -- # build_accel_config 00:08:36.507 07:54:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:36.507 07:54:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:36.507 07:54:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:36.507 07:54:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:36.507 07:54:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:36.507 07:54:41 -- accel/accel.sh@41 -- # local IFS=, 00:08:36.507 07:54:41 -- accel/accel.sh@42 -- # jq -r . 00:08:36.507 [2024-07-13 07:54:42.026807] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
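
The xor report above is produced by SPDK's accel_perf example binary. A minimal hand-run sketch, using only the flags visible in this trace (the -c /dev/fd/62 argument seen at accel.sh@12 is a JSON accel config that accel.sh supplies over fd 62; a bare run can omit it):

  # -t 1  : run for 1 second   ("Run time: 1 seconds" in the report)
  # -w xor: workload type      ("Workload Type: xor")
  # -y    : verify the results ("Verify: Yes")
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y
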
00:08:36.507 [2024-07-13 07:54:42.027000] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55012 ] 00:08:36.507 [2024-07-13 07:54:42.159971] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.507 [2024-07-13 07:54:42.213696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.507 07:54:42 -- accel/accel.sh@21 -- # val= 00:08:36.507 07:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.507 07:54:42 -- accel/accel.sh@20 -- # IFS=: 00:08:36.507 07:54:42 -- accel/accel.sh@20 -- # read -r var val 00:08:36.507 07:54:42 -- accel/accel.sh@21 -- # val= 00:08:36.507 07:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.507 07:54:42 -- accel/accel.sh@20 -- # IFS=: 00:08:36.507 07:54:42 -- accel/accel.sh@20 -- # read -r var val 00:08:36.507 07:54:42 -- accel/accel.sh@21 -- # val=0x1 00:08:36.507 07:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.507 07:54:42 -- accel/accel.sh@20 -- # IFS=: 00:08:36.507 07:54:42 -- accel/accel.sh@20 -- # read -r var val 00:08:36.507 07:54:42 -- accel/accel.sh@21 -- # val= 00:08:36.507 07:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.507 07:54:42 -- accel/accel.sh@20 -- # IFS=: 00:08:36.507 07:54:42 -- accel/accel.sh@20 -- # read -r var val 00:08:36.507 07:54:42 -- accel/accel.sh@21 -- # val= 00:08:36.507 07:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.507 07:54:42 -- accel/accel.sh@20 -- # IFS=: 00:08:36.507 07:54:42 -- accel/accel.sh@20 -- # read -r var val 00:08:36.507 07:54:42 -- accel/accel.sh@21 -- # val=xor 00:08:36.507 07:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.507 07:54:42 -- accel/accel.sh@24 -- # accel_opc=xor 00:08:36.507 07:54:42 -- accel/accel.sh@20 -- # IFS=: 00:08:36.507 07:54:42 -- accel/accel.sh@20 -- # read -r var val 00:08:36.507 07:54:42 -- accel/accel.sh@21 -- # val=2 00:08:36.507 07:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.507 07:54:42 -- accel/accel.sh@20 -- # IFS=: 00:08:36.507 07:54:42 -- accel/accel.sh@20 -- # read -r var val 00:08:36.507 07:54:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:36.507 07:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.507 07:54:42 -- accel/accel.sh@20 -- # IFS=: 00:08:36.507 07:54:42 -- accel/accel.sh@20 -- # read -r var val 00:08:36.507 07:54:42 -- accel/accel.sh@21 -- # val= 00:08:36.507 07:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.507 07:54:42 -- accel/accel.sh@20 -- # IFS=: 00:08:36.507 07:54:42 -- accel/accel.sh@20 -- # read -r var val 00:08:36.507 07:54:42 -- accel/accel.sh@21 -- # val=software 00:08:36.507 07:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.507 07:54:42 -- accel/accel.sh@23 -- # accel_module=software 00:08:36.507 07:54:42 -- accel/accel.sh@20 -- # IFS=: 00:08:36.507 07:54:42 -- accel/accel.sh@20 -- # read -r var val 00:08:36.507 07:54:42 -- accel/accel.sh@21 -- # val=32 00:08:36.507 07:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.507 07:54:42 -- accel/accel.sh@20 -- # IFS=: 00:08:36.507 07:54:42 -- accel/accel.sh@20 -- # read -r var val 00:08:36.507 07:54:42 -- accel/accel.sh@21 -- # val=32 00:08:36.507 07:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.507 07:54:42 -- accel/accel.sh@20 -- # IFS=: 00:08:36.507 07:54:42 -- accel/accel.sh@20 -- # read -r var val 00:08:36.507 07:54:42 -- accel/accel.sh@21 -- # val=1 00:08:36.507 07:54:42 -- 
accel/accel.sh@22 -- # case "$var" in 00:08:36.507 07:54:42 -- accel/accel.sh@20 -- # IFS=: 00:08:36.507 07:54:42 -- accel/accel.sh@20 -- # read -r var val 00:08:36.508 07:54:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:36.508 07:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.508 07:54:42 -- accel/accel.sh@20 -- # IFS=: 00:08:36.508 07:54:42 -- accel/accel.sh@20 -- # read -r var val 00:08:36.508 07:54:42 -- accel/accel.sh@21 -- # val=Yes 00:08:36.508 07:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.508 07:54:42 -- accel/accel.sh@20 -- # IFS=: 00:08:36.508 07:54:42 -- accel/accel.sh@20 -- # read -r var val 00:08:36.508 07:54:42 -- accel/accel.sh@21 -- # val= 00:08:36.508 07:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.508 07:54:42 -- accel/accel.sh@20 -- # IFS=: 00:08:36.508 07:54:42 -- accel/accel.sh@20 -- # read -r var val 00:08:36.508 07:54:42 -- accel/accel.sh@21 -- # val= 00:08:36.508 07:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.508 07:54:42 -- accel/accel.sh@20 -- # IFS=: 00:08:36.508 07:54:42 -- accel/accel.sh@20 -- # read -r var val 00:08:37.884 07:54:43 -- accel/accel.sh@21 -- # val= 00:08:37.884 07:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.884 07:54:43 -- accel/accel.sh@20 -- # IFS=: 00:08:37.884 07:54:43 -- accel/accel.sh@20 -- # read -r var val 00:08:37.884 07:54:43 -- accel/accel.sh@21 -- # val= 00:08:37.884 07:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.884 07:54:43 -- accel/accel.sh@20 -- # IFS=: 00:08:37.884 07:54:43 -- accel/accel.sh@20 -- # read -r var val 00:08:37.884 07:54:43 -- accel/accel.sh@21 -- # val= 00:08:37.884 07:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.884 07:54:43 -- accel/accel.sh@20 -- # IFS=: 00:08:37.884 07:54:43 -- accel/accel.sh@20 -- # read -r var val 00:08:37.884 07:54:43 -- accel/accel.sh@21 -- # val= 00:08:37.884 07:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.884 07:54:43 -- accel/accel.sh@20 -- # IFS=: 00:08:37.884 07:54:43 -- accel/accel.sh@20 -- # read -r var val 00:08:37.884 07:54:43 -- accel/accel.sh@21 -- # val= 00:08:37.884 07:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.884 07:54:43 -- accel/accel.sh@20 -- # IFS=: 00:08:37.884 07:54:43 -- accel/accel.sh@20 -- # read -r var val 00:08:37.884 07:54:43 -- accel/accel.sh@21 -- # val= 00:08:37.884 07:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.885 07:54:43 -- accel/accel.sh@20 -- # IFS=: 00:08:37.885 07:54:43 -- accel/accel.sh@20 -- # read -r var val 00:08:37.885 ************************************ 00:08:37.885 END TEST accel_xor 00:08:37.885 ************************************ 00:08:37.885 07:54:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:37.885 07:54:43 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:08:37.885 07:54:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:37.885 00:08:37.885 real 0m3.024s 00:08:37.885 user 0m2.470s 00:08:37.885 sys 0m0.267s 00:08:37.885 07:54:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.885 07:54:43 -- common/autotest_common.sh@10 -- # set +x 00:08:37.885 07:54:43 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:37.885 07:54:43 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:08:37.885 07:54:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:37.885 07:54:43 -- common/autotest_common.sh@10 -- # set +x 00:08:37.885 ************************************ 00:08:37.885 START TEST accel_xor 00:08:37.885 ************************************ 00:08:37.885 
07:54:43 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:08:37.885 07:54:43 -- accel/accel.sh@16 -- # local accel_opc 00:08:37.885 07:54:43 -- accel/accel.sh@17 -- # local accel_module 00:08:37.885 07:54:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:08:37.885 07:54:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:37.885 07:54:43 -- accel/accel.sh@12 -- # build_accel_config 00:08:37.885 07:54:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:37.885 07:54:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:37.885 07:54:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:37.885 07:54:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:37.885 07:54:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:37.885 07:54:43 -- accel/accel.sh@41 -- # local IFS=, 00:08:37.885 07:54:43 -- accel/accel.sh@42 -- # jq -r . 00:08:37.885 [2024-07-13 07:54:43.582289] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:37.885 [2024-07-13 07:54:43.582500] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55054 ] 00:08:38.143 [2024-07-13 07:54:43.716081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.143 [2024-07-13 07:54:43.769957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.519 07:54:44 -- accel/accel.sh@18 -- # out=' 00:08:39.519 SPDK Configuration: 00:08:39.519 Core mask: 0x1 00:08:39.519 00:08:39.519 Accel Perf Configuration: 00:08:39.519 Workload Type: xor 00:08:39.519 Source buffers: 3 00:08:39.519 Transfer size: 4096 bytes 00:08:39.519 Vector count 1 00:08:39.519 Module: software 00:08:39.519 Queue depth: 32 00:08:39.519 Allocate depth: 32 00:08:39.519 # threads/core: 1 00:08:39.519 Run time: 1 seconds 00:08:39.519 Verify: Yes 00:08:39.519 00:08:39.519 Running for 1 seconds... 00:08:39.519 00:08:39.519 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:39.519 ------------------------------------------------------------------------------------ 00:08:39.519 0,0 31136/s 121 MiB/s 0 0 00:08:39.519 ==================================================================================== 00:08:39.519 Total 31136/s 121 MiB/s 0 0' 00:08:39.519 07:54:44 -- accel/accel.sh@20 -- # IFS=: 00:08:39.519 07:54:44 -- accel/accel.sh@20 -- # read -r var val 00:08:39.519 07:54:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:39.519 07:54:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:39.519 07:54:44 -- accel/accel.sh@12 -- # build_accel_config 00:08:39.519 07:54:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:39.519 07:54:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:39.519 07:54:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:39.519 07:54:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:39.519 07:54:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:39.519 07:54:44 -- accel/accel.sh@41 -- # local IFS=, 00:08:39.519 07:54:44 -- accel/accel.sh@42 -- # jq -r . 00:08:39.519 [2024-07-13 07:54:45.087996] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
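
The Bandwidth column in these reports is simply transfers per second multiplied by the 4096-byte transfer size. A quick shell check against the three-source-buffer xor row above:

  # 31136 transfers/s * 4096 B, scaled to MiB/s (integer division floors the result)
  echo $(( 31136 * 4096 / 1024 / 1024 ))   # prints 121, matching "31136/s 121 MiB/s"
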
00:08:39.519 [2024-07-13 07:54:45.088183] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55083 ] 00:08:39.519 [2024-07-13 07:54:45.218116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.519 [2024-07-13 07:54:45.272001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.519 07:54:45 -- accel/accel.sh@21 -- # val= 00:08:39.519 07:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.519 07:54:45 -- accel/accel.sh@20 -- # IFS=: 00:08:39.519 07:54:45 -- accel/accel.sh@20 -- # read -r var val 00:08:39.519 07:54:45 -- accel/accel.sh@21 -- # val= 00:08:39.519 07:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.519 07:54:45 -- accel/accel.sh@20 -- # IFS=: 00:08:39.519 07:54:45 -- accel/accel.sh@20 -- # read -r var val 00:08:39.519 07:54:45 -- accel/accel.sh@21 -- # val=0x1 00:08:39.519 07:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.519 07:54:45 -- accel/accel.sh@20 -- # IFS=: 00:08:39.519 07:54:45 -- accel/accel.sh@20 -- # read -r var val 00:08:39.519 07:54:45 -- accel/accel.sh@21 -- # val= 00:08:39.519 07:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.519 07:54:45 -- accel/accel.sh@20 -- # IFS=: 00:08:39.519 07:54:45 -- accel/accel.sh@20 -- # read -r var val 00:08:39.519 07:54:45 -- accel/accel.sh@21 -- # val= 00:08:39.519 07:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.519 07:54:45 -- accel/accel.sh@20 -- # IFS=: 00:08:39.519 07:54:45 -- accel/accel.sh@20 -- # read -r var val 00:08:39.519 07:54:45 -- accel/accel.sh@21 -- # val=xor 00:08:39.519 07:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.519 07:54:45 -- accel/accel.sh@24 -- # accel_opc=xor 00:08:39.519 07:54:45 -- accel/accel.sh@20 -- # IFS=: 00:08:39.519 07:54:45 -- accel/accel.sh@20 -- # read -r var val 00:08:39.519 07:54:45 -- accel/accel.sh@21 -- # val=3 00:08:39.519 07:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.519 07:54:45 -- accel/accel.sh@20 -- # IFS=: 00:08:39.519 07:54:45 -- accel/accel.sh@20 -- # read -r var val 00:08:39.519 07:54:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:39.519 07:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.519 07:54:45 -- accel/accel.sh@20 -- # IFS=: 00:08:39.519 07:54:45 -- accel/accel.sh@20 -- # read -r var val 00:08:39.519 07:54:45 -- accel/accel.sh@21 -- # val= 00:08:39.519 07:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.519 07:54:45 -- accel/accel.sh@20 -- # IFS=: 00:08:39.519 07:54:45 -- accel/accel.sh@20 -- # read -r var val 00:08:39.778 07:54:45 -- accel/accel.sh@21 -- # val=software 00:08:39.778 07:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.778 07:54:45 -- accel/accel.sh@23 -- # accel_module=software 00:08:39.778 07:54:45 -- accel/accel.sh@20 -- # IFS=: 00:08:39.778 07:54:45 -- accel/accel.sh@20 -- # read -r var val 00:08:39.778 07:54:45 -- accel/accel.sh@21 -- # val=32 00:08:39.778 07:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.778 07:54:45 -- accel/accel.sh@20 -- # IFS=: 00:08:39.778 07:54:45 -- accel/accel.sh@20 -- # read -r var val 00:08:39.778 07:54:45 -- accel/accel.sh@21 -- # val=32 00:08:39.778 07:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.778 07:54:45 -- accel/accel.sh@20 -- # IFS=: 00:08:39.778 07:54:45 -- accel/accel.sh@20 -- # read -r var val 00:08:39.778 07:54:45 -- accel/accel.sh@21 -- # val=1 00:08:39.778 07:54:45 -- 
accel/accel.sh@22 -- # case "$var" in 00:08:39.778 07:54:45 -- accel/accel.sh@20 -- # IFS=: 00:08:39.778 07:54:45 -- accel/accel.sh@20 -- # read -r var val 00:08:39.778 07:54:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:39.778 07:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.778 07:54:45 -- accel/accel.sh@20 -- # IFS=: 00:08:39.778 07:54:45 -- accel/accel.sh@20 -- # read -r var val 00:08:39.778 07:54:45 -- accel/accel.sh@21 -- # val=Yes 00:08:39.778 07:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.778 07:54:45 -- accel/accel.sh@20 -- # IFS=: 00:08:39.778 07:54:45 -- accel/accel.sh@20 -- # read -r var val 00:08:39.778 07:54:45 -- accel/accel.sh@21 -- # val= 00:08:39.778 07:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.778 07:54:45 -- accel/accel.sh@20 -- # IFS=: 00:08:39.778 07:54:45 -- accel/accel.sh@20 -- # read -r var val 00:08:39.778 07:54:45 -- accel/accel.sh@21 -- # val= 00:08:39.778 07:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.778 07:54:45 -- accel/accel.sh@20 -- # IFS=: 00:08:39.779 07:54:45 -- accel/accel.sh@20 -- # read -r var val 00:08:40.715 07:54:46 -- accel/accel.sh@21 -- # val= 00:08:40.715 07:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:40.715 07:54:46 -- accel/accel.sh@20 -- # IFS=: 00:08:40.715 07:54:46 -- accel/accel.sh@20 -- # read -r var val 00:08:40.715 07:54:46 -- accel/accel.sh@21 -- # val= 00:08:40.715 07:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:40.715 07:54:46 -- accel/accel.sh@20 -- # IFS=: 00:08:40.715 07:54:46 -- accel/accel.sh@20 -- # read -r var val 00:08:40.715 07:54:46 -- accel/accel.sh@21 -- # val= 00:08:40.715 07:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:40.715 07:54:46 -- accel/accel.sh@20 -- # IFS=: 00:08:40.715 07:54:46 -- accel/accel.sh@20 -- # read -r var val 00:08:40.715 07:54:46 -- accel/accel.sh@21 -- # val= 00:08:40.715 07:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:40.715 07:54:46 -- accel/accel.sh@20 -- # IFS=: 00:08:40.715 07:54:46 -- accel/accel.sh@20 -- # read -r var val 00:08:40.715 07:54:46 -- accel/accel.sh@21 -- # val= 00:08:40.715 07:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:40.715 07:54:46 -- accel/accel.sh@20 -- # IFS=: 00:08:40.715 07:54:46 -- accel/accel.sh@20 -- # read -r var val 00:08:40.715 07:54:46 -- accel/accel.sh@21 -- # val= 00:08:40.715 07:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:40.715 07:54:46 -- accel/accel.sh@20 -- # IFS=: 00:08:40.715 07:54:46 -- accel/accel.sh@20 -- # read -r var val 00:08:40.715 07:54:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:40.715 07:54:46 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:08:40.715 07:54:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:40.715 00:08:40.715 real 0m3.010s 00:08:40.715 user 0m2.450s 00:08:40.715 sys 0m0.269s 00:08:40.715 07:54:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.715 07:54:46 -- common/autotest_common.sh@10 -- # set +x 00:08:40.715 ************************************ 00:08:40.715 END TEST accel_xor 00:08:40.715 ************************************ 00:08:40.715 07:54:46 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:40.715 07:54:46 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:08:40.715 07:54:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:40.715 07:54:46 -- common/autotest_common.sh@10 -- # set +x 00:08:40.715 ************************************ 00:08:40.715 START TEST accel_dif_verify 00:08:40.715 ************************************ 
00:08:40.715 07:54:46 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:08:40.715 07:54:46 -- accel/accel.sh@16 -- # local accel_opc 00:08:40.715 07:54:46 -- accel/accel.sh@17 -- # local accel_module 00:08:40.715 07:54:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:08:40.715 07:54:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:40.715 07:54:46 -- accel/accel.sh@12 -- # build_accel_config 00:08:40.715 07:54:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:40.715 07:54:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:40.715 07:54:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:40.715 07:54:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:40.715 07:54:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:40.715 07:54:46 -- accel/accel.sh@41 -- # local IFS=, 00:08:40.715 07:54:46 -- accel/accel.sh@42 -- # jq -r . 00:08:41.000 [2024-07-13 07:54:46.638439] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:41.000 [2024-07-13 07:54:46.638659] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55125 ] 00:08:41.000 [2024-07-13 07:54:46.769627] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.258 [2024-07-13 07:54:46.823775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.636 07:54:48 -- accel/accel.sh@18 -- # out=' 00:08:42.636 SPDK Configuration: 00:08:42.636 Core mask: 0x1 00:08:42.636 00:08:42.636 Accel Perf Configuration: 00:08:42.636 Workload Type: dif_verify 00:08:42.636 Vector size: 4096 bytes 00:08:42.636 Transfer size: 4096 bytes 00:08:42.636 Block size: 512 bytes 00:08:42.636 Metadata size: 8 bytes 00:08:42.636 Vector count 1 00:08:42.636 Module: software 00:08:42.636 Queue depth: 32 00:08:42.636 Allocate depth: 32 00:08:42.636 # threads/core: 1 00:08:42.636 Run time: 1 seconds 00:08:42.636 Verify: No 00:08:42.636 00:08:42.636 Running for 1 seconds... 00:08:42.636 00:08:42.636 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:42.636 ------------------------------------------------------------------------------------ 00:08:42.636 0,0 54592/s 216 MiB/s 0 0 00:08:42.636 ==================================================================================== 00:08:42.636 Total 54592/s 213 MiB/s 0 0' 00:08:42.636 07:54:48 -- accel/accel.sh@20 -- # IFS=: 00:08:42.636 07:54:48 -- accel/accel.sh@20 -- # read -r var val 00:08:42.636 07:54:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:42.636 07:54:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:42.636 07:54:48 -- accel/accel.sh@12 -- # build_accel_config 00:08:42.636 07:54:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:42.636 07:54:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:42.636 07:54:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:42.636 07:54:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:42.636 07:54:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:42.636 07:54:48 -- accel/accel.sh@41 -- # local IFS=, 00:08:42.636 07:54:48 -- accel/accel.sh@42 -- # jq -r . 00:08:42.636 [2024-07-13 07:54:48.145639] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
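
The dif_verify configuration above protects 512-byte blocks with 8 bytes of DIF metadata each, so a 4096-byte transfer spans eight blocks; assuming the usual interleaved DIF layout, the guarded buffer grows to blocks * (block size + metadata size):

  echo $(( 4096 / 512 ))                 # 8 blocks per transfer
  echo $(( (4096 / 512) * (512 + 8) ))   # 4160 bytes once each block carries its DIF
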
00:08:42.636 [2024-07-13 07:54:48.145845] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55152 ] 00:08:42.636 [2024-07-13 07:54:48.280662] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.636 [2024-07-13 07:54:48.334802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.636 07:54:48 -- accel/accel.sh@21 -- # val= 00:08:42.636 07:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.636 07:54:48 -- accel/accel.sh@20 -- # IFS=: 00:08:42.636 07:54:48 -- accel/accel.sh@20 -- # read -r var val 00:08:42.636 07:54:48 -- accel/accel.sh@21 -- # val= 00:08:42.636 07:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.636 07:54:48 -- accel/accel.sh@20 -- # IFS=: 00:08:42.636 07:54:48 -- accel/accel.sh@20 -- # read -r var val 00:08:42.636 07:54:48 -- accel/accel.sh@21 -- # val=0x1 00:08:42.636 07:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.636 07:54:48 -- accel/accel.sh@20 -- # IFS=: 00:08:42.636 07:54:48 -- accel/accel.sh@20 -- # read -r var val 00:08:42.636 07:54:48 -- accel/accel.sh@21 -- # val= 00:08:42.636 07:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.636 07:54:48 -- accel/accel.sh@20 -- # IFS=: 00:08:42.636 07:54:48 -- accel/accel.sh@20 -- # read -r var val 00:08:42.636 07:54:48 -- accel/accel.sh@21 -- # val= 00:08:42.636 07:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.636 07:54:48 -- accel/accel.sh@20 -- # IFS=: 00:08:42.636 07:54:48 -- accel/accel.sh@20 -- # read -r var val 00:08:42.636 07:54:48 -- accel/accel.sh@21 -- # val=dif_verify 00:08:42.636 07:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.636 07:54:48 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:08:42.636 07:54:48 -- accel/accel.sh@20 -- # IFS=: 00:08:42.636 07:54:48 -- accel/accel.sh@20 -- # read -r var val 00:08:42.636 07:54:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:42.636 07:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.636 07:54:48 -- accel/accel.sh@20 -- # IFS=: 00:08:42.636 07:54:48 -- accel/accel.sh@20 -- # read -r var val 00:08:42.636 07:54:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:42.636 07:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.636 07:54:48 -- accel/accel.sh@20 -- # IFS=: 00:08:42.636 07:54:48 -- accel/accel.sh@20 -- # read -r var val 00:08:42.636 07:54:48 -- accel/accel.sh@21 -- # val='512 bytes' 00:08:42.636 07:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.636 07:54:48 -- accel/accel.sh@20 -- # IFS=: 00:08:42.636 07:54:48 -- accel/accel.sh@20 -- # read -r var val 00:08:42.636 07:54:48 -- accel/accel.sh@21 -- # val='8 bytes' 00:08:42.636 07:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.636 07:54:48 -- accel/accel.sh@20 -- # IFS=: 00:08:42.636 07:54:48 -- accel/accel.sh@20 -- # read -r var val 00:08:42.636 07:54:48 -- accel/accel.sh@21 -- # val= 00:08:42.637 07:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.637 07:54:48 -- accel/accel.sh@20 -- # IFS=: 00:08:42.637 07:54:48 -- accel/accel.sh@20 -- # read -r var val 00:08:42.637 07:54:48 -- accel/accel.sh@21 -- # val=software 00:08:42.637 07:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.637 07:54:48 -- accel/accel.sh@23 -- # accel_module=software 00:08:42.637 07:54:48 -- accel/accel.sh@20 -- # IFS=: 00:08:42.637 07:54:48 -- accel/accel.sh@20 -- # read -r var val 00:08:42.637 07:54:48 -- accel/accel.sh@21 
-- # val=32 00:08:42.637 07:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.637 07:54:48 -- accel/accel.sh@20 -- # IFS=: 00:08:42.637 07:54:48 -- accel/accel.sh@20 -- # read -r var val 00:08:42.637 07:54:48 -- accel/accel.sh@21 -- # val=32 00:08:42.637 07:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.637 07:54:48 -- accel/accel.sh@20 -- # IFS=: 00:08:42.637 07:54:48 -- accel/accel.sh@20 -- # read -r var val 00:08:42.637 07:54:48 -- accel/accel.sh@21 -- # val=1 00:08:42.637 07:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.637 07:54:48 -- accel/accel.sh@20 -- # IFS=: 00:08:42.637 07:54:48 -- accel/accel.sh@20 -- # read -r var val 00:08:42.637 07:54:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:42.637 07:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.637 07:54:48 -- accel/accel.sh@20 -- # IFS=: 00:08:42.637 07:54:48 -- accel/accel.sh@20 -- # read -r var val 00:08:42.637 07:54:48 -- accel/accel.sh@21 -- # val=No 00:08:42.637 07:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.637 07:54:48 -- accel/accel.sh@20 -- # IFS=: 00:08:42.637 07:54:48 -- accel/accel.sh@20 -- # read -r var val 00:08:42.637 07:54:48 -- accel/accel.sh@21 -- # val= 00:08:42.637 07:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.637 07:54:48 -- accel/accel.sh@20 -- # IFS=: 00:08:42.637 07:54:48 -- accel/accel.sh@20 -- # read -r var val 00:08:42.637 07:54:48 -- accel/accel.sh@21 -- # val= 00:08:42.637 07:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.637 07:54:48 -- accel/accel.sh@20 -- # IFS=: 00:08:42.637 07:54:48 -- accel/accel.sh@20 -- # read -r var val 00:08:44.015 07:54:49 -- accel/accel.sh@21 -- # val= 00:08:44.015 07:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.015 07:54:49 -- accel/accel.sh@20 -- # IFS=: 00:08:44.015 07:54:49 -- accel/accel.sh@20 -- # read -r var val 00:08:44.015 07:54:49 -- accel/accel.sh@21 -- # val= 00:08:44.015 07:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.015 07:54:49 -- accel/accel.sh@20 -- # IFS=: 00:08:44.015 07:54:49 -- accel/accel.sh@20 -- # read -r var val 00:08:44.015 07:54:49 -- accel/accel.sh@21 -- # val= 00:08:44.015 07:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.015 07:54:49 -- accel/accel.sh@20 -- # IFS=: 00:08:44.015 07:54:49 -- accel/accel.sh@20 -- # read -r var val 00:08:44.015 07:54:49 -- accel/accel.sh@21 -- # val= 00:08:44.015 07:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.015 07:54:49 -- accel/accel.sh@20 -- # IFS=: 00:08:44.015 07:54:49 -- accel/accel.sh@20 -- # read -r var val 00:08:44.015 07:54:49 -- accel/accel.sh@21 -- # val= 00:08:44.015 07:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.015 07:54:49 -- accel/accel.sh@20 -- # IFS=: 00:08:44.015 07:54:49 -- accel/accel.sh@20 -- # read -r var val 00:08:44.015 07:54:49 -- accel/accel.sh@21 -- # val= 00:08:44.015 07:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.015 07:54:49 -- accel/accel.sh@20 -- # IFS=: 00:08:44.015 07:54:49 -- accel/accel.sh@20 -- # read -r var val 00:08:44.015 ************************************ 00:08:44.015 END TEST accel_dif_verify 00:08:44.015 ************************************ 00:08:44.015 07:54:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:44.015 07:54:49 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:08:44.015 07:54:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:44.015 00:08:44.015 real 0m3.008s 00:08:44.015 user 0m2.474s 00:08:44.015 sys 0m0.252s 00:08:44.015 07:54:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.015 
07:54:49 -- common/autotest_common.sh@10 -- # set +x 00:08:44.015 07:54:49 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:44.015 07:54:49 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:08:44.015 07:54:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:44.015 07:54:49 -- common/autotest_common.sh@10 -- # set +x 00:08:44.015 ************************************ 00:08:44.015 START TEST accel_dif_generate 00:08:44.015 ************************************ 00:08:44.015 07:54:49 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:08:44.015 07:54:49 -- accel/accel.sh@16 -- # local accel_opc 00:08:44.015 07:54:49 -- accel/accel.sh@17 -- # local accel_module 00:08:44.015 07:54:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:08:44.015 07:54:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:44.015 07:54:49 -- accel/accel.sh@12 -- # build_accel_config 00:08:44.015 07:54:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:44.015 07:54:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:44.015 07:54:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:44.015 07:54:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:44.015 07:54:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:44.015 07:54:49 -- accel/accel.sh@41 -- # local IFS=, 00:08:44.015 07:54:49 -- accel/accel.sh@42 -- # jq -r . 00:08:44.015 [2024-07-13 07:54:49.699766] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:44.015 [2024-07-13 07:54:49.699980] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55194 ] 00:08:44.275 [2024-07-13 07:54:49.840214] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.275 [2024-07-13 07:54:49.893888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.654 07:54:51 -- accel/accel.sh@18 -- # out=' 00:08:45.654 SPDK Configuration: 00:08:45.654 Core mask: 0x1 00:08:45.654 00:08:45.654 Accel Perf Configuration: 00:08:45.654 Workload Type: dif_generate 00:08:45.654 Vector size: 4096 bytes 00:08:45.654 Transfer size: 4096 bytes 00:08:45.654 Block size: 512 bytes 00:08:45.654 Metadata size: 8 bytes 00:08:45.654 Vector count 1 00:08:45.654 Module: software 00:08:45.654 Queue depth: 32 00:08:45.654 Allocate depth: 32 00:08:45.654 # threads/core: 1 00:08:45.654 Run time: 1 seconds 00:08:45.654 Verify: No 00:08:45.654 00:08:45.654 Running for 1 seconds... 
00:08:45.654 00:08:45.654 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:45.654 ------------------------------------------------------------------------------------ 00:08:45.654 0,0 54176/s 214 MiB/s 0 0 00:08:45.654 ==================================================================================== 00:08:45.654 Total 54176/s 211 MiB/s 0 0' 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # IFS=: 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # read -r var val 00:08:45.654 07:54:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:45.654 07:54:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:45.654 07:54:51 -- accel/accel.sh@12 -- # build_accel_config 00:08:45.654 07:54:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:45.654 07:54:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:45.654 07:54:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:45.654 07:54:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:45.654 07:54:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:45.654 07:54:51 -- accel/accel.sh@41 -- # local IFS=, 00:08:45.654 07:54:51 -- accel/accel.sh@42 -- # jq -r . 00:08:45.654 [2024-07-13 07:54:51.211636] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:45.654 [2024-07-13 07:54:51.211829] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55216 ] 00:08:45.654 [2024-07-13 07:54:51.344302] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.654 [2024-07-13 07:54:51.398738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.654 07:54:51 -- accel/accel.sh@21 -- # val= 00:08:45.654 07:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # IFS=: 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # read -r var val 00:08:45.654 07:54:51 -- accel/accel.sh@21 -- # val= 00:08:45.654 07:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # IFS=: 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # read -r var val 00:08:45.654 07:54:51 -- accel/accel.sh@21 -- # val=0x1 00:08:45.654 07:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # IFS=: 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # read -r var val 00:08:45.654 07:54:51 -- accel/accel.sh@21 -- # val= 00:08:45.654 07:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # IFS=: 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # read -r var val 00:08:45.654 07:54:51 -- accel/accel.sh@21 -- # val= 00:08:45.654 07:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # IFS=: 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # read -r var val 00:08:45.654 07:54:51 -- accel/accel.sh@21 -- # val=dif_generate 00:08:45.654 07:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.654 07:54:51 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # IFS=: 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # read -r var val 00:08:45.654 07:54:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:45.654 07:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # IFS=: 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # read -r var val 00:08:45.654 
07:54:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:45.654 07:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # IFS=: 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # read -r var val 00:08:45.654 07:54:51 -- accel/accel.sh@21 -- # val='512 bytes' 00:08:45.654 07:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # IFS=: 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # read -r var val 00:08:45.654 07:54:51 -- accel/accel.sh@21 -- # val='8 bytes' 00:08:45.654 07:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # IFS=: 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # read -r var val 00:08:45.654 07:54:51 -- accel/accel.sh@21 -- # val= 00:08:45.654 07:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # IFS=: 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # read -r var val 00:08:45.654 07:54:51 -- accel/accel.sh@21 -- # val=software 00:08:45.654 07:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.654 07:54:51 -- accel/accel.sh@23 -- # accel_module=software 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # IFS=: 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # read -r var val 00:08:45.654 07:54:51 -- accel/accel.sh@21 -- # val=32 00:08:45.654 07:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # IFS=: 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # read -r var val 00:08:45.654 07:54:51 -- accel/accel.sh@21 -- # val=32 00:08:45.654 07:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # IFS=: 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # read -r var val 00:08:45.654 07:54:51 -- accel/accel.sh@21 -- # val=1 00:08:45.654 07:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # IFS=: 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # read -r var val 00:08:45.654 07:54:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:45.654 07:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # IFS=: 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # read -r var val 00:08:45.654 07:54:51 -- accel/accel.sh@21 -- # val=No 00:08:45.654 07:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # IFS=: 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # read -r var val 00:08:45.654 07:54:51 -- accel/accel.sh@21 -- # val= 00:08:45.654 07:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # IFS=: 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # read -r var val 00:08:45.654 07:54:51 -- accel/accel.sh@21 -- # val= 00:08:45.654 07:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # IFS=: 00:08:45.654 07:54:51 -- accel/accel.sh@20 -- # read -r var val 00:08:47.053 07:54:52 -- accel/accel.sh@21 -- # val= 00:08:47.053 07:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.053 07:54:52 -- accel/accel.sh@20 -- # IFS=: 00:08:47.053 07:54:52 -- accel/accel.sh@20 -- # read -r var val 00:08:47.053 07:54:52 -- accel/accel.sh@21 -- # val= 00:08:47.053 07:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.053 07:54:52 -- accel/accel.sh@20 -- # IFS=: 00:08:47.053 07:54:52 -- accel/accel.sh@20 -- # read -r var val 00:08:47.053 07:54:52 -- accel/accel.sh@21 -- # val= 00:08:47.053 07:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.053 07:54:52 -- accel/accel.sh@20 -- # 
IFS=: 00:08:47.053 07:54:52 -- accel/accel.sh@20 -- # read -r var val 00:08:47.053 07:54:52 -- accel/accel.sh@21 -- # val= 00:08:47.053 07:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.053 07:54:52 -- accel/accel.sh@20 -- # IFS=: 00:08:47.053 07:54:52 -- accel/accel.sh@20 -- # read -r var val 00:08:47.053 07:54:52 -- accel/accel.sh@21 -- # val= 00:08:47.053 07:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.053 07:54:52 -- accel/accel.sh@20 -- # IFS=: 00:08:47.053 07:54:52 -- accel/accel.sh@20 -- # read -r var val 00:08:47.053 07:54:52 -- accel/accel.sh@21 -- # val= 00:08:47.053 07:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.053 07:54:52 -- accel/accel.sh@20 -- # IFS=: 00:08:47.053 07:54:52 -- accel/accel.sh@20 -- # read -r var val 00:08:47.053 ************************************ 00:08:47.053 END TEST accel_dif_generate 00:08:47.053 ************************************ 00:08:47.053 07:54:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:47.053 07:54:52 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:08:47.053 07:54:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:47.053 00:08:47.053 real 0m3.014s 00:08:47.053 user 0m2.469s 00:08:47.053 sys 0m0.264s 00:08:47.053 07:54:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:47.053 07:54:52 -- common/autotest_common.sh@10 -- # set +x 00:08:47.053 07:54:52 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:47.053 07:54:52 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:08:47.053 07:54:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:47.053 07:54:52 -- common/autotest_common.sh@10 -- # set +x 00:08:47.053 ************************************ 00:08:47.053 START TEST accel_dif_generate_copy 00:08:47.053 ************************************ 00:08:47.053 07:54:52 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:08:47.053 07:54:52 -- accel/accel.sh@16 -- # local accel_opc 00:08:47.053 07:54:52 -- accel/accel.sh@17 -- # local accel_module 00:08:47.053 07:54:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:08:47.053 07:54:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:47.053 07:54:52 -- accel/accel.sh@12 -- # build_accel_config 00:08:47.053 07:54:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:47.053 07:54:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:47.053 07:54:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:47.053 07:54:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:47.053 07:54:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:47.053 07:54:52 -- accel/accel.sh@41 -- # local IFS=, 00:08:47.053 07:54:52 -- accel/accel.sh@42 -- # jq -r . 00:08:47.053 [2024-07-13 07:54:52.764135] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
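
To pull a figure back out of a captured report like the dif_generate one above, the fixed layout of the Total row makes a one-line awk enough (report.txt stands in for wherever the out=' text was saved):

  # prints "54176/s" for the dif_generate report above
  awk '/Total/ {print $2}' report.txt
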
00:08:47.053 [2024-07-13 07:54:52.764339] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55263 ] 00:08:47.312 [2024-07-13 07:54:52.898385] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.312 [2024-07-13 07:54:52.953197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.688 07:54:54 -- accel/accel.sh@18 -- # out=' 00:08:48.688 SPDK Configuration: 00:08:48.688 Core mask: 0x1 00:08:48.688 00:08:48.688 Accel Perf Configuration: 00:08:48.688 Workload Type: dif_generate_copy 00:08:48.688 Vector size: 4096 bytes 00:08:48.688 Transfer size: 4096 bytes 00:08:48.688 Vector count 1 00:08:48.688 Module: software 00:08:48.688 Queue depth: 32 00:08:48.688 Allocate depth: 32 00:08:48.688 # threads/core: 1 00:08:48.688 Run time: 1 seconds 00:08:48.688 Verify: No 00:08:48.689 00:08:48.689 Running for 1 seconds... 00:08:48.689 00:08:48.689 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:48.689 ------------------------------------------------------------------------------------ 00:08:48.689 0,0 48992/s 194 MiB/s 0 0 00:08:48.689 ==================================================================================== 00:08:48.689 Total 48992/s 191 MiB/s 0 0' 00:08:48.689 07:54:54 -- accel/accel.sh@20 -- # IFS=: 00:08:48.689 07:54:54 -- accel/accel.sh@20 -- # read -r var val 00:08:48.689 07:54:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:48.689 07:54:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:48.689 07:54:54 -- accel/accel.sh@12 -- # build_accel_config 00:08:48.689 07:54:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:48.689 07:54:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:48.689 07:54:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:48.689 07:54:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:48.689 07:54:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:48.689 07:54:54 -- accel/accel.sh@41 -- # local IFS=, 00:08:48.689 07:54:54 -- accel/accel.sh@42 -- # jq -r . 00:08:48.689 [2024-07-13 07:54:54.273051] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
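
As with every workload in this log, dif_generate_copy runs twice: the out=' block just above is the first pass captured into a shell variable (accel.sh@18), and the invocation now starting repeats it under xtrace so each val= assignment is logged. The pattern, as it reads from the trace (fd 62 must carry the JSON config that build_accel_config assembles):

  out=$(accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy)   # captured report
  accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy          # traced re-run
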
00:08:48.689 [2024-07-13 07:54:54.273283] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55290 ] 00:08:48.689 [2024-07-13 07:54:54.409896] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.689 [2024-07-13 07:54:54.465225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.947 07:54:54 -- accel/accel.sh@21 -- # val= 00:08:48.947 07:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.947 07:54:54 -- accel/accel.sh@20 -- # IFS=: 00:08:48.947 07:54:54 -- accel/accel.sh@20 -- # read -r var val 00:08:48.947 07:54:54 -- accel/accel.sh@21 -- # val= 00:08:48.947 07:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.947 07:54:54 -- accel/accel.sh@20 -- # IFS=: 00:08:48.947 07:54:54 -- accel/accel.sh@20 -- # read -r var val 00:08:48.947 07:54:54 -- accel/accel.sh@21 -- # val=0x1 00:08:48.947 07:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.947 07:54:54 -- accel/accel.sh@20 -- # IFS=: 00:08:48.947 07:54:54 -- accel/accel.sh@20 -- # read -r var val 00:08:48.947 07:54:54 -- accel/accel.sh@21 -- # val= 00:08:48.947 07:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.947 07:54:54 -- accel/accel.sh@20 -- # IFS=: 00:08:48.947 07:54:54 -- accel/accel.sh@20 -- # read -r var val 00:08:48.947 07:54:54 -- accel/accel.sh@21 -- # val= 00:08:48.947 07:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.947 07:54:54 -- accel/accel.sh@20 -- # IFS=: 00:08:48.947 07:54:54 -- accel/accel.sh@20 -- # read -r var val 00:08:48.948 07:54:54 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:08:48.948 07:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.948 07:54:54 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:08:48.948 07:54:54 -- accel/accel.sh@20 -- # IFS=: 00:08:48.948 07:54:54 -- accel/accel.sh@20 -- # read -r var val 00:08:48.948 07:54:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:48.948 07:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.948 07:54:54 -- accel/accel.sh@20 -- # IFS=: 00:08:48.948 07:54:54 -- accel/accel.sh@20 -- # read -r var val 00:08:48.948 07:54:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:48.948 07:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.948 07:54:54 -- accel/accel.sh@20 -- # IFS=: 00:08:48.948 07:54:54 -- accel/accel.sh@20 -- # read -r var val 00:08:48.948 07:54:54 -- accel/accel.sh@21 -- # val= 00:08:48.948 07:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.948 07:54:54 -- accel/accel.sh@20 -- # IFS=: 00:08:48.948 07:54:54 -- accel/accel.sh@20 -- # read -r var val 00:08:48.948 07:54:54 -- accel/accel.sh@21 -- # val=software 00:08:48.948 07:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.948 07:54:54 -- accel/accel.sh@23 -- # accel_module=software 00:08:48.948 07:54:54 -- accel/accel.sh@20 -- # IFS=: 00:08:48.948 07:54:54 -- accel/accel.sh@20 -- # read -r var val 00:08:48.948 07:54:54 -- accel/accel.sh@21 -- # val=32 00:08:48.948 07:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.948 07:54:54 -- accel/accel.sh@20 -- # IFS=: 00:08:48.948 07:54:54 -- accel/accel.sh@20 -- # read -r var val 00:08:48.948 07:54:54 -- accel/accel.sh@21 -- # val=32 00:08:48.948 07:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.948 07:54:54 -- accel/accel.sh@20 -- # IFS=: 00:08:48.948 07:54:54 -- accel/accel.sh@20 -- # read -r var val 00:08:48.948 07:54:54 -- accel/accel.sh@21 
-- # val=1 00:08:48.948 07:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.948 07:54:54 -- accel/accel.sh@20 -- # IFS=: 00:08:48.948 07:54:54 -- accel/accel.sh@20 -- # read -r var val 00:08:48.948 07:54:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:48.948 07:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.948 07:54:54 -- accel/accel.sh@20 -- # IFS=: 00:08:48.948 07:54:54 -- accel/accel.sh@20 -- # read -r var val 00:08:48.948 07:54:54 -- accel/accel.sh@21 -- # val=No 00:08:48.948 07:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.948 07:54:54 -- accel/accel.sh@20 -- # IFS=: 00:08:48.948 07:54:54 -- accel/accel.sh@20 -- # read -r var val 00:08:48.948 07:54:54 -- accel/accel.sh@21 -- # val= 00:08:48.948 07:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.948 07:54:54 -- accel/accel.sh@20 -- # IFS=: 00:08:48.948 07:54:54 -- accel/accel.sh@20 -- # read -r var val 00:08:48.948 07:54:54 -- accel/accel.sh@21 -- # val= 00:08:48.948 07:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.948 07:54:54 -- accel/accel.sh@20 -- # IFS=: 00:08:48.948 07:54:54 -- accel/accel.sh@20 -- # read -r var val 00:08:49.884 07:54:55 -- accel/accel.sh@21 -- # val= 00:08:49.884 07:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.884 07:54:55 -- accel/accel.sh@20 -- # IFS=: 00:08:49.884 07:54:55 -- accel/accel.sh@20 -- # read -r var val 00:08:49.884 07:54:55 -- accel/accel.sh@21 -- # val= 00:08:49.884 07:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.884 07:54:55 -- accel/accel.sh@20 -- # IFS=: 00:08:49.884 07:54:55 -- accel/accel.sh@20 -- # read -r var val 00:08:49.884 07:54:55 -- accel/accel.sh@21 -- # val= 00:08:49.884 07:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.884 07:54:55 -- accel/accel.sh@20 -- # IFS=: 00:08:49.884 07:54:55 -- accel/accel.sh@20 -- # read -r var val 00:08:49.884 07:54:55 -- accel/accel.sh@21 -- # val= 00:08:49.884 07:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.884 07:54:55 -- accel/accel.sh@20 -- # IFS=: 00:08:49.884 07:54:55 -- accel/accel.sh@20 -- # read -r var val 00:08:49.884 07:54:55 -- accel/accel.sh@21 -- # val= 00:08:49.884 07:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.884 07:54:55 -- accel/accel.sh@20 -- # IFS=: 00:08:49.884 07:54:55 -- accel/accel.sh@20 -- # read -r var val 00:08:49.884 07:54:55 -- accel/accel.sh@21 -- # val= 00:08:49.884 07:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.884 07:54:55 -- accel/accel.sh@20 -- # IFS=: 00:08:49.884 07:54:55 -- accel/accel.sh@20 -- # read -r var val 00:08:49.884 ************************************ 00:08:49.884 END TEST accel_dif_generate_copy 00:08:49.884 ************************************ 00:08:49.884 07:54:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:49.884 07:54:55 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:08:49.884 07:54:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:49.884 00:08:49.884 real 0m3.032s 00:08:49.884 user 0m2.475s 00:08:49.884 sys 0m0.262s 00:08:49.884 07:54:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.884 07:54:55 -- common/autotest_common.sh@10 -- # set +x 00:08:50.147 07:54:55 -- accel/accel.sh@107 -- # [[ n == y ]] 00:08:50.147 07:54:55 -- accel/accel.sh@116 -- # [[ n == y ]] 00:08:50.147 07:54:55 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:50.147 07:54:55 -- accel/accel.sh@129 -- # build_accel_config 00:08:50.147 07:54:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 
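
The two [[ n == y ]] checks above (accel.sh@107 and @116) are build-flag gates: the literal n means the corresponding CONFIG option is unset, so the hardware-backed accel suites they guard are skipped and only the software module is exercised in this run. Schematically, with CONFIG_EXAMPLE as a made-up placeholder name:

  if [[ "${CONFIG_EXAMPLE:-n}" == y ]]; then
      echo "would run the hardware-backed accel suite here"   # skipped: flag expands to n
  fi
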
00:08:50.147 07:54:55 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:50.147 07:54:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:50.147 07:54:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:50.147 07:54:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:50.147 07:54:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:50.147 07:54:55 -- common/autotest_common.sh@10 -- # set +x 00:08:50.147 07:54:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:50.147 07:54:55 -- accel/accel.sh@41 -- # local IFS=, 00:08:50.147 07:54:55 -- accel/accel.sh@42 -- # jq -r . 00:08:50.147 ************************************ 00:08:50.147 START TEST accel_dif_functional_tests 00:08:50.147 ************************************ 00:08:50.147 07:54:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:50.147 [2024-07-13 07:54:55.845576] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:50.147 [2024-07-13 07:54:55.845759] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55337 ] 00:08:50.426 [2024-07-13 07:54:55.987497] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:50.426 [2024-07-13 07:54:56.038494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.426 [2024-07-13 07:54:56.038650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.426 [2024-07-13 07:54:56.038648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:50.426 00:08:50.426 00:08:50.426 CUnit - A unit testing framework for C - Version 2.1-3 00:08:50.426 http://cunit.sourceforge.net/ 00:08:50.426 00:08:50.426 00:08:50.426 Suite: accel_dif 00:08:50.426 Test: verify: DIF generated, GUARD check ...passed 00:08:50.426 Test: verify: DIF generated, APPTAG check ...passed 00:08:50.426 Test: verify: DIF generated, REFTAG check ...passed 00:08:50.426 Test: verify: DIF not generated, GUARD check ...passed 00:08:50.426 Test: verify: DIF not generated, APPTAG check ...[2024-07-13 07:54:56.128085] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:50.426 [2024-07-13 07:54:56.128227] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:50.426 [2024-07-13 07:54:56.128315] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:50.426 passed 00:08:50.426 Test: verify: DIF not generated, REFTAG check ...passed 00:08:50.426 Test: verify: APPTAG correct, APPTAG check ...[2024-07-13 07:54:56.128422] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:50.426 [2024-07-13 07:54:56.128493] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:50.426 [2024-07-13 07:54:56.128553] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:50.426 passed 00:08:50.426 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:08:50.426 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-07-13 07:54:56.128680] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:50.426 passed 00:08:50.426 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:50.426 Test: verify: REFTAG_INIT correct, REFTAG 
check ...passed 00:08:50.426 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:08:50.427 Test: generate copy: DIF generated, GUARD check ...[2024-07-13 07:54:56.128992] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:50.427 passed 00:08:50.427 Test: generate copy: DIF generated, APPTAG check ...passed 00:08:50.427 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:50.427 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:50.427 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:50.427 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:50.427 Test: generate copy: iovecs-len validate ...passed 00:08:50.427 Test: generate copy: buffer alignment validate ...passed 00:08:50.427 00:08:50.427 Run Summary: Type Total Ran Passed Failed Inactive 00:08:50.427 suites 1 1 n/a 0 0 00:08:50.427 tests 20 20 20 0 0 00:08:50.427 asserts 204 204 204 0 n/a 00:08:50.427 00:08:50.427 Elapsed time = 0.010 seconds 00:08:50.427 [2024-07-13 07:54:56.129640] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:08:50.686 ************************************ 00:08:50.686 END TEST accel_dif_functional_tests 00:08:50.686 ************************************ 00:08:50.686 00:08:50.686 real 0m0.615s 00:08:50.686 user 0m0.676s 00:08:50.686 sys 0m0.178s 00:08:50.686 07:54:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.686 07:54:56 -- common/autotest_common.sh@10 -- # set +x 00:08:50.686 ************************************ 00:08:50.686 END TEST accel 00:08:50.686 00:08:50.686 real 0m43.937s 00:08:50.686 user 0m35.193s 00:08:50.686 sys 0m5.038s 00:08:50.686 07:54:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.686 07:54:56 -- common/autotest_common.sh@10 -- # set +x 00:08:50.686 ************************************ 00:08:50.686 07:54:56 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:50.686 07:54:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:50.686 07:54:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:50.686 07:54:56 -- common/autotest_common.sh@10 -- # set +x 00:08:50.686 ************************************ 00:08:50.686 START TEST accel_rpc 00:08:50.686 ************************************ 00:08:50.686 07:54:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:50.686 * Looking for test storage... 00:08:50.686 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:50.686 07:54:56 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:50.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.945 07:54:56 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=55408 00:08:50.945 07:54:56 -- accel/accel_rpc.sh@15 -- # waitforlisten 55408 00:08:50.945 07:54:56 -- common/autotest_common.sh@819 -- # '[' -z 55408 ']' 00:08:50.945 07:54:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.945 07:54:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:50.945 07:54:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
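The 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' message above comes from waitforlisten, which pairs with a paused spdk_tgt launch: the target starts up and the harness polls the RPC socket until it answers. A hedged sketch of that handshake (the polling loop is a reconstruction; max_retries=100 matches the trace, and rpc.py's -t timeout flag is an assumption):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
    spdk_tgt_pid=$!
    for ((i = 0; i < 100; i++)); do   # local max_retries=100, as traced above
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done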
00:08:50.945 07:54:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:50.945 07:54:56 -- common/autotest_common.sh@10 -- # set +x 00:08:50.945 07:54:56 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:50.945 [2024-07-13 07:54:56.634026] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:50.945 [2024-07-13 07:54:56.634244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55408 ] 00:08:51.205 [2024-07-13 07:54:56.773812] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.205 [2024-07-13 07:54:56.823034] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:51.205 [2024-07-13 07:54:56.823236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.772 07:54:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:51.772 07:54:57 -- common/autotest_common.sh@852 -- # return 0 00:08:51.772 07:54:57 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:51.772 07:54:57 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:51.772 07:54:57 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:51.772 07:54:57 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:51.772 07:54:57 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:51.772 07:54:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:51.772 07:54:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:51.772 07:54:57 -- common/autotest_common.sh@10 -- # set +x 00:08:51.772 ************************************ 00:08:51.772 START TEST accel_assign_opcode 00:08:51.772 ************************************ 00:08:51.772 07:54:57 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:08:51.772 07:54:57 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:51.772 07:54:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:51.772 07:54:57 -- common/autotest_common.sh@10 -- # set +x 00:08:51.772 [2024-07-13 07:54:57.435608] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:51.772 07:54:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:51.772 07:54:57 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:51.772 07:54:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:51.772 07:54:57 -- common/autotest_common.sh@10 -- # set +x 00:08:51.772 [2024-07-13 07:54:57.447605] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:51.772 07:54:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:51.772 07:54:57 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:51.772 07:54:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:51.772 07:54:57 -- common/autotest_common.sh@10 -- # set +x 00:08:52.031 07:54:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.031 07:54:57 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:52.031 07:54:57 -- accel/accel_rpc.sh@42 -- # grep software 00:08:52.031 07:54:57 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:52.031 07:54:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.031 07:54:57 -- common/autotest_common.sh@10 -- # set +x 
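The accel_assign_opcode suite traced above boils down to four RPCs, all named verbatim in the log: assign the 'copy' opcode to a bogus module, reassign it to software (the last assignment before init wins), complete initialization, then read the assignment table back. Reproduced with the stock rpc.py client:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC accel_assign_opc -o copy -m incorrect    # accepted pre-init; logged only as a NOTICE
    $RPC accel_assign_opc -o copy -m software     # overrides the previous assignment
    $RPC framework_start_init                     # subsystem init locks the table in
    $RPC accel_get_opc_assignments | jq -r .copy  # expect: software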
00:08:52.031 07:54:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.031 software 00:08:52.031 00:08:52.031 real 0m0.273s 00:08:52.031 user 0m0.059s 00:08:52.031 sys 0m0.008s 00:08:52.031 07:54:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:52.031 ************************************ 00:08:52.031 END TEST accel_assign_opcode 00:08:52.031 ************************************ 00:08:52.031 07:54:57 -- common/autotest_common.sh@10 -- # set +x 00:08:52.031 07:54:57 -- accel/accel_rpc.sh@55 -- # killprocess 55408 00:08:52.031 07:54:57 -- common/autotest_common.sh@926 -- # '[' -z 55408 ']' 00:08:52.031 07:54:57 -- common/autotest_common.sh@930 -- # kill -0 55408 00:08:52.031 07:54:57 -- common/autotest_common.sh@931 -- # uname 00:08:52.031 07:54:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:52.031 07:54:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55408 00:08:52.031 killing process with pid 55408 00:08:52.031 07:54:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:52.031 07:54:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:52.031 07:54:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55408' 00:08:52.031 07:54:57 -- common/autotest_common.sh@945 -- # kill 55408 00:08:52.031 07:54:57 -- common/autotest_common.sh@950 -- # wait 55408 00:08:52.290 00:08:52.290 real 0m1.673s 00:08:52.290 user 0m1.602s 00:08:52.290 sys 0m0.413s 00:08:52.290 07:54:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:52.290 ************************************ 00:08:52.290 END TEST accel_rpc 00:08:52.290 ************************************ 00:08:52.290 07:54:58 -- common/autotest_common.sh@10 -- # set +x 00:08:52.549 07:54:58 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:52.549 07:54:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:52.549 07:54:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:52.549 07:54:58 -- common/autotest_common.sh@10 -- # set +x 00:08:52.549 ************************************ 00:08:52.549 START TEST app_cmdline 00:08:52.549 ************************************ 00:08:52.549 07:54:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:52.549 * Looking for test storage... 00:08:52.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:52.549 07:54:58 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:52.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.550 07:54:58 -- app/cmdline.sh@17 -- # spdk_tgt_pid=55525 00:08:52.550 07:54:58 -- app/cmdline.sh@18 -- # waitforlisten 55525 00:08:52.550 07:54:58 -- common/autotest_common.sh@819 -- # '[' -z 55525 ']' 00:08:52.550 07:54:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.550 07:54:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:52.550 07:54:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:52.550 07:54:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:52.550 07:54:58 -- common/autotest_common.sh@10 -- # set +x 00:08:52.550 07:54:58 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:52.808 [2024-07-13 07:54:58.363139] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:52.808 [2024-07-13 07:54:58.363355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55525 ] 00:08:52.808 [2024-07-13 07:54:58.512231] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.808 [2024-07-13 07:54:58.561556] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:52.808 [2024-07-13 07:54:58.561747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.376 07:54:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:53.376 07:54:59 -- common/autotest_common.sh@852 -- # return 0 00:08:53.376 07:54:59 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:53.635 { 00:08:53.635 "version": "SPDK v24.01.1-pre git sha1 4b94202c6", 00:08:53.635 "fields": { 00:08:53.635 "major": 24, 00:08:53.635 "minor": 1, 00:08:53.635 "patch": 1, 00:08:53.635 "suffix": "-pre", 00:08:53.635 "commit": "4b94202c6" 00:08:53.635 } 00:08:53.635 } 00:08:53.635 07:54:59 -- app/cmdline.sh@22 -- # expected_methods=() 00:08:53.635 07:54:59 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:53.635 07:54:59 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:53.635 07:54:59 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:53.635 07:54:59 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:53.635 07:54:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:53.635 07:54:59 -- common/autotest_common.sh@10 -- # set +x 00:08:53.635 07:54:59 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:53.635 07:54:59 -- app/cmdline.sh@26 -- # sort 00:08:53.635 07:54:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:53.635 07:54:59 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:53.635 07:54:59 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:53.635 07:54:59 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:53.635 07:54:59 -- common/autotest_common.sh@640 -- # local es=0 00:08:53.635 07:54:59 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:53.635 07:54:59 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:53.635 07:54:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:53.635 07:54:59 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:53.635 07:54:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:53.635 07:54:59 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:53.635 07:54:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:53.635 07:54:59 -- common/autotest_common.sh@634 -- # 
arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:53.635 07:54:59 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:53.635 07:54:59 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:53.894 request: 00:08:53.894 { 00:08:53.894 "method": "env_dpdk_get_mem_stats", 00:08:53.894 "req_id": 1 00:08:53.894 } 00:08:53.894 Got JSON-RPC error response 00:08:53.894 response: 00:08:53.894 { 00:08:53.894 "code": -32601, 00:08:53.894 "message": "Method not found" 00:08:53.894 } 00:08:53.894 07:54:59 -- common/autotest_common.sh@643 -- # es=1 00:08:53.894 07:54:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:53.894 07:54:59 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:53.894 07:54:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:53.894 07:54:59 -- app/cmdline.sh@1 -- # killprocess 55525 00:08:53.894 07:54:59 -- common/autotest_common.sh@926 -- # '[' -z 55525 ']' 00:08:53.894 07:54:59 -- common/autotest_common.sh@930 -- # kill -0 55525 00:08:53.894 07:54:59 -- common/autotest_common.sh@931 -- # uname 00:08:53.894 07:54:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:53.894 07:54:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55525 00:08:53.894 killing process with pid 55525 00:08:53.894 07:54:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:53.894 07:54:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:53.894 07:54:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55525' 00:08:53.894 07:54:59 -- common/autotest_common.sh@945 -- # kill 55525 00:08:53.894 07:54:59 -- common/autotest_common.sh@950 -- # wait 55525 00:08:54.460 ************************************ 00:08:54.460 END TEST app_cmdline 00:08:54.460 ************************************ 00:08:54.460 00:08:54.460 real 0m1.903s 00:08:54.460 user 0m2.078s 00:08:54.460 sys 0m0.475s 00:08:54.460 07:55:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:54.460 07:55:00 -- common/autotest_common.sh@10 -- # set +x 00:08:54.460 07:55:00 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:54.460 07:55:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:54.460 07:55:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:54.460 07:55:00 -- common/autotest_common.sh@10 -- # set +x 00:08:54.460 ************************************ 00:08:54.460 START TEST version 00:08:54.460 ************************************ 00:08:54.460 07:55:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:54.460 * Looking for test storage... 
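The -32601 'Method not found' response above is the allowlist doing its job, not a failure: spdk_tgt was started with --rpcs-allowed, so env_dpdk_get_mem_stats is rejected even though the method exists in the target. The flags and method names below are taken straight from the log:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
        --rpcs-allowed spdk_get_version,rpc_get_methods &
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC spdk_get_version          # allowed: returns the version object shown above
    $RPC env_dpdk_get_mem_stats    # not allowlisted: JSON-RPC error code -32601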
00:08:54.460 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:54.460 07:55:00 -- app/version.sh@17 -- # get_header_version major 00:08:54.460 07:55:00 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:54.460 07:55:00 -- app/version.sh@14 -- # tr -d '"' 00:08:54.460 07:55:00 -- app/version.sh@14 -- # cut -f2 00:08:54.460 07:55:00 -- app/version.sh@17 -- # major=24 00:08:54.460 07:55:00 -- app/version.sh@18 -- # get_header_version minor 00:08:54.460 07:55:00 -- app/version.sh@14 -- # cut -f2 00:08:54.460 07:55:00 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:54.460 07:55:00 -- app/version.sh@14 -- # tr -d '"' 00:08:54.460 07:55:00 -- app/version.sh@18 -- # minor=1 00:08:54.460 07:55:00 -- app/version.sh@19 -- # get_header_version patch 00:08:54.460 07:55:00 -- app/version.sh@14 -- # cut -f2 00:08:54.460 07:55:00 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:54.460 07:55:00 -- app/version.sh@14 -- # tr -d '"' 00:08:54.460 07:55:00 -- app/version.sh@19 -- # patch=1 00:08:54.460 07:55:00 -- app/version.sh@20 -- # get_header_version suffix 00:08:54.460 07:55:00 -- app/version.sh@14 -- # cut -f2 00:08:54.460 07:55:00 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:54.461 07:55:00 -- app/version.sh@14 -- # tr -d '"' 00:08:54.461 07:55:00 -- app/version.sh@20 -- # suffix=-pre 00:08:54.461 07:55:00 -- app/version.sh@22 -- # version=24.1 00:08:54.461 07:55:00 -- app/version.sh@25 -- # (( patch != 0 )) 00:08:54.461 07:55:00 -- app/version.sh@25 -- # version=24.1.1 00:08:54.461 07:55:00 -- app/version.sh@28 -- # version=24.1.1rc0 00:08:54.461 07:55:00 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:54.461 07:55:00 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:54.461 07:55:00 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:08:54.461 07:55:00 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:08:54.461 00:08:54.461 real 0m0.142s 00:08:54.461 user 0m0.078s 00:08:54.461 sys 0m0.097s 00:08:54.461 07:55:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:54.461 ************************************ 00:08:54.461 END TEST version 00:08:54.461 ************************************ 00:08:54.461 07:55:00 -- common/autotest_common.sh@10 -- # set +x 00:08:54.461 07:55:00 -- spdk/autotest.sh@194 -- # '[' 1 -eq 1 ']' 00:08:54.461 07:55:00 -- spdk/autotest.sh@195 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:08:54.461 07:55:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:54.461 07:55:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:54.461 07:55:00 -- common/autotest_common.sh@10 -- # set +x 00:08:54.461 ************************************ 00:08:54.461 START TEST blockdev_general 00:08:54.461 ************************************ 00:08:54.461 07:55:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:08:54.719 * Looking for test storage... 
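Each get_header_version call in the version test above is the same three-stage pipeline over include/spdk/version.h; reconstructed from the exact grep/cut/tr commands in the trace (cut -f2 relies on the header's tab-separated '#define<TAB>NAME<TAB>value' layout):

    get_header_version() {   # $1 is MAJOR, MINOR, PATCH, or SUFFIX
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
            /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'
    }
    version="$(get_header_version MAJOR).$(get_header_version MINOR)"   # 24.1, as above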
00:08:54.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:54.719 07:55:00 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:54.719 07:55:00 -- bdev/nbd_common.sh@6 -- # set -e 00:08:54.719 07:55:00 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:54.719 07:55:00 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:54.719 07:55:00 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:54.719 07:55:00 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:54.719 07:55:00 -- bdev/blockdev.sh@18 -- # : 00:08:54.719 07:55:00 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:08:54.719 07:55:00 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:08:54.719 07:55:00 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:08:54.719 07:55:00 -- bdev/blockdev.sh@672 -- # uname -s 00:08:54.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.719 07:55:00 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:08:54.719 07:55:00 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:08:54.719 07:55:00 -- bdev/blockdev.sh@680 -- # test_type=bdev 00:08:54.719 07:55:00 -- bdev/blockdev.sh@681 -- # crypto_device= 00:08:54.719 07:55:00 -- bdev/blockdev.sh@682 -- # dek= 00:08:54.719 07:55:00 -- bdev/blockdev.sh@683 -- # env_ctx= 00:08:54.719 07:55:00 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:08:54.719 07:55:00 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:08:54.719 07:55:00 -- bdev/blockdev.sh@688 -- # [[ bdev == bdev ]] 00:08:54.719 07:55:00 -- bdev/blockdev.sh@689 -- # wait_for_rpc=--wait-for-rpc 00:08:54.719 07:55:00 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:08:54.719 07:55:00 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=55696 00:08:54.719 07:55:00 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:54.719 07:55:00 -- bdev/blockdev.sh@47 -- # waitforlisten 55696 00:08:54.719 07:55:00 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:08:54.719 07:55:00 -- common/autotest_common.sh@819 -- # '[' -z 55696 ']' 00:08:54.720 07:55:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.720 07:55:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:54.720 07:55:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.720 07:55:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:54.720 07:55:00 -- common/autotest_common.sh@10 -- # set +x 00:08:54.720 [2024-07-13 07:55:00.489347] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
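The Malloc0 through Malloc9, TestPT and AIO0 bdevs reported below are all created over the RPC socket during setup_bdev_conf. A hedged sketch of the kind of calls involved; only the bdev_aio_create arguments appear verbatim in this log, while the Malloc call is an assumption matched to the 65536 x 512 B geometry listed later under 'I/O targets':

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_malloc_create -b Malloc0 32 512        # 32 MiB RAM-backed bdev, 512 B blocks (call assumed, not in this log)
    $RPC bdev_passthru_create -b Malloc3 -p TestPT   # passthru vbdev over Malloc3, per the NOTICEs below
    $RPC bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048   # as traced below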
00:08:54.720 [2024-07-13 07:55:00.489548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55696 ] 00:08:54.978 [2024-07-13 07:55:00.618123] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.978 [2024-07-13 07:55:00.661765] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:54.979 [2024-07-13 07:55:00.661945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.547 07:55:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:55.547 07:55:01 -- common/autotest_common.sh@852 -- # return 0 00:08:55.547 07:55:01 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:08:55.547 07:55:01 -- bdev/blockdev.sh@694 -- # setup_bdev_conf 00:08:55.547 07:55:01 -- bdev/blockdev.sh@51 -- # rpc_cmd 00:08:55.547 07:55:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:55.547 07:55:01 -- common/autotest_common.sh@10 -- # set +x 00:08:55.806 [2024-07-13 07:55:01.505277] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:55.806 [2024-07-13 07:55:01.505348] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:55.806 00:08:55.806 [2024-07-13 07:55:01.513246] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:55.806 [2024-07-13 07:55:01.513285] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:55.806 00:08:55.806 Malloc0 00:08:55.806 Malloc1 00:08:55.806 Malloc2 00:08:55.806 Malloc3 00:08:55.806 Malloc4 00:08:55.806 Malloc5 00:08:55.806 Malloc6 00:08:56.065 Malloc7 00:08:56.065 Malloc8 00:08:56.065 Malloc9 00:08:56.065 [2024-07-13 07:55:01.657451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:56.065 [2024-07-13 07:55:01.657516] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:56.065 [2024-07-13 07:55:01.657546] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002ca80 00:08:56.065 [2024-07-13 07:55:01.657575] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:56.065 [2024-07-13 07:55:01.659188] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:56.065 [2024-07-13 07:55:01.659236] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:08:56.065 TestPT 00:08:56.065 07:55:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:56.065 07:55:01 -- bdev/blockdev.sh@74 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:08:56.065 5000+0 records in 00:08:56.065 5000+0 records out 00:08:56.065 10240000 bytes (10 MB) copied, 0.0294735 s, 347 MB/s 00:08:56.065 07:55:01 -- bdev/blockdev.sh@75 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:08:56.065 07:55:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:56.065 07:55:01 -- common/autotest_common.sh@10 -- # set +x 00:08:56.065 AIO0 00:08:56.065 07:55:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:56.065 07:55:01 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:08:56.065 07:55:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:56.065 07:55:01 -- common/autotest_common.sh@10 -- # set +x 00:08:56.065 
07:55:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:56.065 07:55:01 -- bdev/blockdev.sh@738 -- # cat 00:08:56.065 07:55:01 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:08:56.065 07:55:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:56.065 07:55:01 -- common/autotest_common.sh@10 -- # set +x 00:08:56.065 07:55:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:56.065 07:55:01 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:08:56.065 07:55:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:56.065 07:55:01 -- common/autotest_common.sh@10 -- # set +x 00:08:56.065 07:55:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:56.065 07:55:01 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:56.065 07:55:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:56.065 07:55:01 -- common/autotest_common.sh@10 -- # set +x 00:08:56.065 07:55:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:56.065 07:55:01 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:08:56.065 07:55:01 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:08:56.065 07:55:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:56.065 07:55:01 -- common/autotest_common.sh@10 -- # set +x 00:08:56.065 07:55:01 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:08:56.325 07:55:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:56.325 07:55:01 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:08:56.325 07:55:01 -- bdev/blockdev.sh@747 -- # jq -r .name 00:08:56.326 07:55:01 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "87fdf3d5-fa19-4193-a7a4-2acad2e3d43d"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "87fdf3d5-fa19-4193-a7a4-2acad2e3d43d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "0e04f3fd-def0-5504-942b-8f43f05a1ae8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "0e04f3fd-def0-5504-942b-8f43f05a1ae8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "0c9609ad-b46e-5003-8cc9-2e574d0d84f5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "0c9609ad-b46e-5003-8cc9-2e574d0d84f5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "476adf41-dbf6-5697-821b-5eb45f618182"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "476adf41-dbf6-5697-821b-5eb45f618182",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "cd4ca145-900f-5887-8700-a6264ff5db68"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cd4ca145-900f-5887-8700-a6264ff5db68",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "c8cc2394-3b8c-51b9-a898-b9032523c3a0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c8cc2394-3b8c-51b9-a898-b9032523c3a0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "a9947bce-8eae-538d-8337-9d13ced89114"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a9947bce-8eae-538d-8337-9d13ced89114",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": 
[' ' "7d793c51-736b-501b-8a8f-2a84ceb9e410"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7d793c51-736b-501b-8a8f-2a84ceb9e410",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "8569505a-6c28-52b3-8a0d-4c0306c888d5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8569505a-6c28-52b3-8a0d-4c0306c888d5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "d3a62988-602e-5805-b604-1dce2e731ed4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d3a62988-602e-5805-b604-1dce2e731ed4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "42163303-6378-55ea-8141-5b61f912125a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "42163303-6378-55ea-8141-5b61f912125a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "927553b5-37d3-511d-b151-b1fd32e1c673"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "927553b5-37d3-511d-b151-b1fd32e1c673",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "18e42ca6-bc8c-434b-8805-7fd3e66c4c80"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "18e42ca6-bc8c-434b-8805-7fd3e66c4c80",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "18e42ca6-bc8c-434b-8805-7fd3e66c4c80",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "6c51c52c-2db0-4d26-954d-d36826815aaa",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "d2cc0385-d93f-4262-8848-efe79d22a76b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "afbb1cc5-8f13-4988-bbb1-8fcbb25cf356"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "afbb1cc5-8f13-4988-bbb1-8fcbb25cf356",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "afbb1cc5-8f13-4988-bbb1-8fcbb25cf356",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "683775de-2621-414d-93c7-57ea018e4596",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "018249c6-f325-4afa-b1a4-639bd3e4904f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "ef515ae8-e05b-4b01-95bd-7576fc8c070a"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ef515ae8-e05b-4b01-95bd-7576fc8c070a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 
0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "ef515ae8-e05b-4b01-95bd-7576fc8c070a",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "5d7982fa-0db7-40c1-b15d-d31fda8bdba1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "49a393e9-89a7-44f5-95ab-73f32232f52a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "4051591d-1f64-4564-ba84-b953c56723c4"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "4051591d-1f64-4564-ba84-b953c56723c4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:08:56.326 07:55:01 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:08:56.326 07:55:01 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Malloc0 00:08:56.326 07:55:01 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:08:56.326 07:55:01 -- bdev/blockdev.sh@752 -- # killprocess 55696 00:08:56.326 07:55:01 -- common/autotest_common.sh@926 -- # '[' -z 55696 ']' 00:08:56.326 07:55:01 -- common/autotest_common.sh@930 -- # kill -0 55696 00:08:56.326 07:55:01 -- common/autotest_common.sh@931 -- # uname 00:08:56.326 07:55:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:56.326 07:55:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55696 00:08:56.326 killing process with pid 55696 00:08:56.326 07:55:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:56.326 07:55:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:56.326 07:55:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55696' 00:08:56.326 07:55:02 -- common/autotest_common.sh@945 -- # kill 55696 00:08:56.326 07:55:02 -- common/autotest_common.sh@950 -- # wait 55696 00:08:56.895 07:55:02 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:56.895 07:55:02 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:08:56.895 07:55:02 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:56.895 
07:55:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:56.895 07:55:02 -- common/autotest_common.sh@10 -- # set +x 00:08:56.895 ************************************ 00:08:56.895 START TEST bdev_hello_world 00:08:56.895 ************************************ 00:08:56.895 07:55:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:08:56.895 [2024-07-13 07:55:02.576140] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:56.895 [2024-07-13 07:55:02.576313] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55750 ] 00:08:57.155 [2024-07-13 07:55:02.706604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.155 [2024-07-13 07:55:02.750090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.155 [2024-07-13 07:55:02.883699] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:57.155 [2024-07-13 07:55:02.883782] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:57.155 [2024-07-13 07:55:02.891642] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:57.155 [2024-07-13 07:55:02.891692] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:57.155 [2024-07-13 07:55:02.899699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:57.155 [2024-07-13 07:55:02.899742] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:08:57.155 [2024-07-13 07:55:02.899766] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:08:57.416 [2024-07-13 07:55:02.970635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:57.416 [2024-07-13 07:55:02.970715] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.416 [2024-07-13 07:55:02.970774] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002a980 00:08:57.416 [2024-07-13 07:55:02.970799] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.416 [2024-07-13 07:55:02.972614] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.416 [2024-07-13 07:55:02.972656] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:08:57.416 [2024-07-13 07:55:03.100482] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:57.416 [2024-07-13 07:55:03.100555] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:08:57.416 [2024-07-13 07:55:03.100644] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:57.416 [2024-07-13 07:55:03.100684] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:57.416 [2024-07-13 07:55:03.100746] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:57.416 [2024-07-13 07:55:03.100787] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:57.416 [2024-07-13 07:55:03.100823] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
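The hello_bdev NOTICE sequence above (start the app, open Malloc0, get an I/O channel, write 'Hello World!', read it back) comes from a single invocation, which can be reproduced standalone with the same arguments run_test passed:

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b Malloc0   # writes the string to Malloc0, reads it back, then stops the app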
00:08:57.416 00:08:57.416 [2024-07-13 07:55:03.100848] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:57.675 00:08:57.675 real 0m0.956s 00:08:57.675 user 0m0.494s 00:08:57.675 sys 0m0.252s 00:08:57.675 07:55:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.675 ************************************ 00:08:57.675 END TEST bdev_hello_world 00:08:57.675 ************************************ 00:08:57.675 07:55:03 -- common/autotest_common.sh@10 -- # set +x 00:08:57.675 07:55:03 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:08:57.675 07:55:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:57.675 07:55:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:57.675 07:55:03 -- common/autotest_common.sh@10 -- # set +x 00:08:57.675 ************************************ 00:08:57.675 START TEST bdev_bounds 00:08:57.675 ************************************ 00:08:57.675 07:55:03 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:08:57.675 Process bdevio pid: 55788 00:08:57.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.675 07:55:03 -- bdev/blockdev.sh@288 -- # bdevio_pid=55788 00:08:57.675 07:55:03 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:57.675 07:55:03 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 55788' 00:08:57.675 07:55:03 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:57.675 07:55:03 -- bdev/blockdev.sh@291 -- # waitforlisten 55788 00:08:57.675 07:55:03 -- common/autotest_common.sh@819 -- # '[' -z 55788 ']' 00:08:57.675 07:55:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.675 07:55:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:57.675 07:55:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.675 07:55:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:57.675 07:55:03 -- common/autotest_common.sh@10 -- # set +x 00:08:57.933 [2024-07-13 07:55:03.586113] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:57.933 [2024-07-13 07:55:03.586285] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55788 ] 00:08:57.933 [2024-07-13 07:55:03.721793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:58.191 [2024-07-13 07:55:03.766251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.191 [2024-07-13 07:55:03.766322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.191 [2024-07-13 07:55:03.766320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:58.191 [2024-07-13 07:55:03.899532] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:58.191 [2024-07-13 07:55:03.899634] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:58.191 [2024-07-13 07:55:03.907483] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:58.191 [2024-07-13 07:55:03.907552] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:58.191 [2024-07-13 07:55:03.915538] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:58.191 [2024-07-13 07:55:03.915605] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:08:58.191 [2024-07-13 07:55:03.915626] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:08:58.191 [2024-07-13 07:55:03.987101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:58.191 [2024-07-13 07:55:03.987183] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.191 [2024-07-13 07:55:03.987249] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002a980 00:08:58.191 [2024-07-13 07:55:03.987271] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.191 [2024-07-13 07:55:03.989203] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.191 [2024-07-13 07:55:03.989239] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:08:58.793 07:55:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:58.793 07:55:04 -- common/autotest_common.sh@852 -- # return 0 00:08:58.793 07:55:04 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:58.793 I/O targets: 00:08:58.793 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:08:58.793 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:08:58.793 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:08:58.793 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:08:58.793 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:08:58.793 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:08:58.793 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:08:58.793 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:08:58.793 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:08:58.793 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:08:58.793 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:08:58.793 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:08:58.793 raid0: 131072 blocks of 512 bytes (64 MiB) 00:08:58.793 concat0: 131072 blocks of 512 bytes (64 MiB) 00:08:58.793 raid1: 65536 blocks of 512 bytes (32 MiB) 00:08:58.793 AIO0: 5000 blocks of 2048 bytes (10 MiB) 
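The bounds run above is a two-process arrangement: bdevio starts with -w (wait for the RPC trigger to start testing) and -s 0 (no extra memory reservation), and tests.py then fires the perform_tests RPC that drives every per-bdev CUnit suite listed under 'I/O targets'. Both commands appear in the trace:

    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests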
00:08:58.793 00:08:58.793 00:08:58.793 CUnit - A unit testing framework for C - Version 2.1-3 00:08:58.793 http://cunit.sourceforge.net/ 00:08:58.793 00:08:58.793 00:08:58.793 Suite: bdevio tests on: AIO0 00:08:58.793 Test: blockdev write read block ...passed 00:08:58.793 Test: blockdev write zeroes read block ...passed 00:08:58.793 Test: blockdev write zeroes read no split ...passed 00:08:58.793 Test: blockdev write zeroes read split ...passed 00:08:58.793 Test: blockdev write zeroes read split partial ...passed 00:08:58.793 Test: blockdev reset ...passed 00:08:58.793 Test: blockdev write read 8 blocks ...passed 00:08:58.793 Test: blockdev write read size > 128k ...passed 00:08:58.793 Test: blockdev write read invalid size ...passed 00:08:58.793 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:58.793 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:58.793 Test: blockdev write read max offset ...passed 00:08:58.793 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:58.793 Test: blockdev writev readv 8 blocks ...passed 00:08:58.793 Test: blockdev writev readv 30 x 1block ...passed 00:08:58.793 Test: blockdev writev readv block ...passed 00:08:58.793 Test: blockdev writev readv size > 128k ...passed 00:08:58.793 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:58.793 Test: blockdev comparev and writev ...passed 00:08:58.793 Test: blockdev nvme passthru rw ...passed 00:08:58.793 Test: blockdev nvme passthru vendor specific ...passed 00:08:58.793 Test: blockdev nvme admin passthru ...passed 00:08:58.793 Test: blockdev copy ...passed 00:08:58.793 Suite: bdevio tests on: raid1 00:08:58.793 Test: blockdev write read block ...passed 00:08:58.793 Test: blockdev write zeroes read block ...passed 00:08:58.793 Test: blockdev write zeroes read no split ...passed 00:08:58.793 Test: blockdev write zeroes read split ...passed 00:08:58.793 Test: blockdev write zeroes read split partial ...passed 00:08:58.793 Test: blockdev reset ...passed 00:08:58.793 Test: blockdev write read 8 blocks ...passed 00:08:58.793 Test: blockdev write read size > 128k ...passed 00:08:58.793 Test: blockdev write read invalid size ...passed 00:08:58.793 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:58.793 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:58.793 Test: blockdev write read max offset ...passed 00:08:58.793 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:58.793 Test: blockdev writev readv 8 blocks ...passed 00:08:58.793 Test: blockdev writev readv 30 x 1block ...passed 00:08:58.793 Test: blockdev writev readv block ...passed 00:08:58.793 Test: blockdev writev readv size > 128k ...passed 00:08:58.793 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:58.793 Test: blockdev comparev and writev ...passed 00:08:58.793 Test: blockdev nvme passthru rw ...passed 00:08:58.793 Test: blockdev nvme passthru vendor specific ...passed 00:08:58.793 Test: blockdev nvme admin passthru ...passed 00:08:58.793 Test: blockdev copy ...passed 00:08:58.793 Suite: bdevio tests on: concat0 00:08:58.793 Test: blockdev write read block ...passed 00:08:58.793 Test: blockdev write zeroes read block ...passed 00:08:58.793 Test: blockdev write zeroes read no split ...passed 00:08:58.793 Test: blockdev write zeroes read split ...passed 00:08:58.793 Test: blockdev write zeroes read split partial ...passed 00:08:58.793 Test: blockdev reset 
...passed 00:08:58.793 Test: blockdev write read 8 blocks ...passed 00:08:58.793 Test: blockdev write read size > 128k ...passed 00:08:58.793 Test: blockdev write read invalid size ...passed 00:08:58.793 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:58.793 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:58.793 Test: blockdev write read max offset ...passed 00:08:58.793 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:58.793 Test: blockdev writev readv 8 blocks ...passed 00:08:58.793 Test: blockdev writev readv 30 x 1block ...passed 00:08:58.793 Test: blockdev writev readv block ...passed 00:08:58.793 Test: blockdev writev readv size > 128k ...passed 00:08:58.793 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:58.793 Test: blockdev comparev and writev ...passed 00:08:58.793 Test: blockdev nvme passthru rw ...passed 00:08:58.793 Test: blockdev nvme passthru vendor specific ...passed 00:08:58.793 Test: blockdev nvme admin passthru ...passed 00:08:58.793 Test: blockdev copy ...passed 00:08:58.793 Suite: bdevio tests on: raid0 00:08:58.793 Test: blockdev write read block ...passed 00:08:58.793 Test: blockdev write zeroes read block ...passed 00:08:58.793 Test: blockdev write zeroes read no split ...passed 00:08:58.793 Test: blockdev write zeroes read split ...passed 00:08:58.793 Test: blockdev write zeroes read split partial ...passed 00:08:58.793 Test: blockdev reset ...passed 00:08:58.793 Test: blockdev write read 8 blocks ...passed 00:08:58.793 Test: blockdev write read size > 128k ...passed 00:08:58.793 Test: blockdev write read invalid size ...passed 00:08:58.793 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:58.793 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:58.793 Test: blockdev write read max offset ...passed 00:08:58.793 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:58.794 Test: blockdev writev readv 8 blocks ...passed 00:08:58.794 Test: blockdev writev readv 30 x 1block ...passed 00:08:58.794 Test: blockdev writev readv block ...passed 00:08:58.794 Test: blockdev writev readv size > 128k ...passed 00:08:58.794 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:58.794 Test: blockdev comparev and writev ...passed 00:08:58.794 Test: blockdev nvme passthru rw ...passed 00:08:58.794 Test: blockdev nvme passthru vendor specific ...passed 00:08:58.794 Test: blockdev nvme admin passthru ...passed 00:08:58.794 Test: blockdev copy ...passed 00:08:58.794 Suite: bdevio tests on: TestPT 00:08:58.794 Test: blockdev write read block ...passed 00:08:58.794 Test: blockdev write zeroes read block ...passed 00:08:58.794 Test: blockdev write zeroes read no split ...passed 00:08:59.053 Test: blockdev write zeroes read split ...passed 00:08:59.053 Test: blockdev write zeroes read split partial ...passed 00:08:59.053 Test: blockdev reset ...passed 00:08:59.053 Test: blockdev write read 8 blocks ...passed 00:08:59.053 Test: blockdev write read size > 128k ...passed 00:08:59.053 Test: blockdev write read invalid size ...passed 00:08:59.053 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:59.053 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:59.053 Test: blockdev write read max offset ...passed 00:08:59.053 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:59.053 Test: blockdev writev readv 8 blocks 
...passed 00:08:59.053 Test: blockdev writev readv 30 x 1block ...passed 00:08:59.053 Test: blockdev writev readv block ...passed 00:08:59.053 Test: blockdev writev readv size > 128k ...passed 00:08:59.053 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:59.053 Test: blockdev comparev and writev ...passed 00:08:59.053 Test: blockdev nvme passthru rw ...passed 00:08:59.053 Test: blockdev nvme passthru vendor specific ...passed 00:08:59.053 Test: blockdev nvme admin passthru ...passed 00:08:59.053 Test: blockdev copy ...passed 00:08:59.053 Suite: bdevio tests on: Malloc2p7 00:08:59.053 Test: blockdev write read block ...passed 00:08:59.053 Test: blockdev write zeroes read block ...passed 00:08:59.053 Test: blockdev write zeroes read no split ...passed 00:08:59.053 Test: blockdev write zeroes read split ...passed 00:08:59.053 Test: blockdev write zeroes read split partial ...passed 00:08:59.053 Test: blockdev reset ...passed 00:08:59.053 Test: blockdev write read 8 blocks ...passed 00:08:59.053 Test: blockdev write read size > 128k ...passed 00:08:59.053 Test: blockdev write read invalid size ...passed 00:08:59.053 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:59.053 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:59.053 Test: blockdev write read max offset ...passed 00:08:59.053 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:59.053 Test: blockdev writev readv 8 blocks ...passed 00:08:59.053 Test: blockdev writev readv 30 x 1block ...passed 00:08:59.053 Test: blockdev writev readv block ...passed 00:08:59.053 Test: blockdev writev readv size > 128k ...passed 00:08:59.053 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:59.053 Test: blockdev comparev and writev ...passed 00:08:59.053 Test: blockdev nvme passthru rw ...passed 00:08:59.053 Test: blockdev nvme passthru vendor specific ...passed 00:08:59.053 Test: blockdev nvme admin passthru ...passed 00:08:59.053 Test: blockdev copy ...passed 00:08:59.053 Suite: bdevio tests on: Malloc2p6 00:08:59.053 Test: blockdev write read block ...passed 00:08:59.053 Test: blockdev write zeroes read block ...passed 00:08:59.053 Test: blockdev write zeroes read no split ...passed 00:08:59.053 Test: blockdev write zeroes read split ...passed 00:08:59.053 Test: blockdev write zeroes read split partial ...passed 00:08:59.053 Test: blockdev reset ...passed 00:08:59.053 Test: blockdev write read 8 blocks ...passed 00:08:59.053 Test: blockdev write read size > 128k ...passed 00:08:59.053 Test: blockdev write read invalid size ...passed 00:08:59.053 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:59.053 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:59.053 Test: blockdev write read max offset ...passed 00:08:59.053 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:59.053 Test: blockdev writev readv 8 blocks ...passed 00:08:59.054 Test: blockdev writev readv 30 x 1block ...passed 00:08:59.054 Test: blockdev writev readv block ...passed 00:08:59.054 Test: blockdev writev readv size > 128k ...passed 00:08:59.054 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:59.054 Test: blockdev comparev and writev ...passed 00:08:59.054 Test: blockdev nvme passthru rw ...passed 00:08:59.054 Test: blockdev nvme passthru vendor specific ...passed 00:08:59.054 Test: blockdev nvme admin passthru ...passed 00:08:59.054 Test: blockdev copy ...passed 
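Each suite in this run is the same 23-item CUnit checklist applied to one bdev: plain write/read, the write-zeroes variants, reset, the boundary cases at offset + nbytes, max offset, overlapped-offset writes, the writev/readv iovec cases, comparev-and-writev, the three NVMe passthru checks (which pass trivially on non-NVMe bdevs), and copy. That is where the totals in the run summary at the end of this pass come from: 16 suites x 23 tests = 368. The pass itself is driven from outside the app: bdevio sits on an RPC socket, and the helper script seen at the top of the trace fires a perform_tests RPC against it. A rough sketch of that hand-off, with the app flags assumed from SPDK's common application options (the tests.py invocation is verbatim from the trace):

    # Sketch: reproduce this bdevio pass by hand from an SPDK checkout.
    ./test/bdev/bdevio/bdevio --json ./test/bdev/bdev.json &     # starts the reactors and builds the stack
    bdevio_pid=$!
    ./test/bdev/bdevio/tests.py perform_tests                    # issues the RPC; emits the CUnit report shown here
    kill $bdevio_pid                                             # mirrors the killprocess step after the summary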
00:08:59.054 Suite: bdevio tests on: Malloc2p5 00:08:59.054 Test: blockdev write read block ...passed 00:08:59.054 Test: blockdev write zeroes read block ...passed 00:08:59.054 Test: blockdev write zeroes read no split ...passed 00:08:59.054 Test: blockdev write zeroes read split ...passed 00:08:59.054 Test: blockdev write zeroes read split partial ...passed 00:08:59.054 Test: blockdev reset ...passed 00:08:59.054 Test: blockdev write read 8 blocks ...passed 00:08:59.054 Test: blockdev write read size > 128k ...passed 00:08:59.054 Test: blockdev write read invalid size ...passed 00:08:59.054 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:59.054 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:59.054 Test: blockdev write read max offset ...passed 00:08:59.054 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:59.054 Test: blockdev writev readv 8 blocks ...passed 00:08:59.054 Test: blockdev writev readv 30 x 1block ...passed 00:08:59.054 Test: blockdev writev readv block ...passed 00:08:59.054 Test: blockdev writev readv size > 128k ...passed 00:08:59.054 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:59.054 Test: blockdev comparev and writev ...passed 00:08:59.054 Test: blockdev nvme passthru rw ...passed 00:08:59.054 Test: blockdev nvme passthru vendor specific ...passed 00:08:59.054 Test: blockdev nvme admin passthru ...passed 00:08:59.054 Test: blockdev copy ...passed 00:08:59.054 Suite: bdevio tests on: Malloc2p4 00:08:59.054 Test: blockdev write read block ...passed 00:08:59.054 Test: blockdev write zeroes read block ...passed 00:08:59.054 Test: blockdev write zeroes read no split ...passed 00:08:59.054 Test: blockdev write zeroes read split ...passed 00:08:59.054 Test: blockdev write zeroes read split partial ...passed 00:08:59.054 Test: blockdev reset ...passed 00:08:59.054 Test: blockdev write read 8 blocks ...passed 00:08:59.054 Test: blockdev write read size > 128k ...passed 00:08:59.054 Test: blockdev write read invalid size ...passed 00:08:59.054 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:59.054 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:59.054 Test: blockdev write read max offset ...passed 00:08:59.054 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:59.054 Test: blockdev writev readv 8 blocks ...passed 00:08:59.054 Test: blockdev writev readv 30 x 1block ...passed 00:08:59.054 Test: blockdev writev readv block ...passed 00:08:59.054 Test: blockdev writev readv size > 128k ...passed 00:08:59.054 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:59.054 Test: blockdev comparev and writev ...passed 00:08:59.054 Test: blockdev nvme passthru rw ...passed 00:08:59.054 Test: blockdev nvme passthru vendor specific ...passed 00:08:59.054 Test: blockdev nvme admin passthru ...passed 00:08:59.054 Test: blockdev copy ...passed 00:08:59.054 Suite: bdevio tests on: Malloc2p3 00:08:59.054 Test: blockdev write read block ...passed 00:08:59.054 Test: blockdev write zeroes read block ...passed 00:08:59.054 Test: blockdev write zeroes read no split ...passed 00:08:59.054 Test: blockdev write zeroes read split ...passed 00:08:59.054 Test: blockdev write zeroes read split partial ...passed 00:08:59.054 Test: blockdev reset ...passed 00:08:59.054 Test: blockdev write read 8 blocks ...passed 00:08:59.054 Test: blockdev write read size > 128k ...passed 00:08:59.054 Test: 
blockdev write read invalid size ...passed 00:08:59.054 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:59.054 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:59.054 Test: blockdev write read max offset ...passed 00:08:59.054 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:59.054 Test: blockdev writev readv 8 blocks ...passed 00:08:59.054 Test: blockdev writev readv 30 x 1block ...passed 00:08:59.054 Test: blockdev writev readv block ...passed 00:08:59.054 Test: blockdev writev readv size > 128k ...passed 00:08:59.054 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:59.054 Test: blockdev comparev and writev ...passed 00:08:59.054 Test: blockdev nvme passthru rw ...passed 00:08:59.054 Test: blockdev nvme passthru vendor specific ...passed 00:08:59.054 Test: blockdev nvme admin passthru ...passed 00:08:59.054 Test: blockdev copy ...passed 00:08:59.054 Suite: bdevio tests on: Malloc2p2 00:08:59.054 Test: blockdev write read block ...passed 00:08:59.054 Test: blockdev write zeroes read block ...passed 00:08:59.054 Test: blockdev write zeroes read no split ...passed 00:08:59.054 Test: blockdev write zeroes read split ...passed 00:08:59.054 Test: blockdev write zeroes read split partial ...passed 00:08:59.054 Test: blockdev reset ...passed 00:08:59.054 Test: blockdev write read 8 blocks ...passed 00:08:59.054 Test: blockdev write read size > 128k ...passed 00:08:59.054 Test: blockdev write read invalid size ...passed 00:08:59.054 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:59.054 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:59.054 Test: blockdev write read max offset ...passed 00:08:59.054 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:59.054 Test: blockdev writev readv 8 blocks ...passed 00:08:59.054 Test: blockdev writev readv 30 x 1block ...passed 00:08:59.054 Test: blockdev writev readv block ...passed 00:08:59.054 Test: blockdev writev readv size > 128k ...passed 00:08:59.054 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:59.054 Test: blockdev comparev and writev ...passed 00:08:59.054 Test: blockdev nvme passthru rw ...passed 00:08:59.054 Test: blockdev nvme passthru vendor specific ...passed 00:08:59.054 Test: blockdev nvme admin passthru ...passed 00:08:59.054 Test: blockdev copy ...passed 00:08:59.054 Suite: bdevio tests on: Malloc2p1 00:08:59.054 Test: blockdev write read block ...passed 00:08:59.054 Test: blockdev write zeroes read block ...passed 00:08:59.054 Test: blockdev write zeroes read no split ...passed 00:08:59.054 Test: blockdev write zeroes read split ...passed 00:08:59.054 Test: blockdev write zeroes read split partial ...passed 00:08:59.054 Test: blockdev reset ...passed 00:08:59.054 Test: blockdev write read 8 blocks ...passed 00:08:59.054 Test: blockdev write read size > 128k ...passed 00:08:59.054 Test: blockdev write read invalid size ...passed 00:08:59.054 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:59.054 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:59.054 Test: blockdev write read max offset ...passed 00:08:59.054 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:59.054 Test: blockdev writev readv 8 blocks ...passed 00:08:59.054 Test: blockdev writev readv 30 x 1block ...passed 00:08:59.054 Test: blockdev writev readv block ...passed 
00:08:59.054 Test: blockdev writev readv size > 128k ...passed 00:08:59.054 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:59.054 Test: blockdev comparev and writev ...passed 00:08:59.054 Test: blockdev nvme passthru rw ...passed 00:08:59.054 Test: blockdev nvme passthru vendor specific ...passed 00:08:59.054 Test: blockdev nvme admin passthru ...passed 00:08:59.054 Test: blockdev copy ...passed 00:08:59.054 Suite: bdevio tests on: Malloc2p0 00:08:59.054 Test: blockdev write read block ...passed 00:08:59.054 Test: blockdev write zeroes read block ...passed 00:08:59.054 Test: blockdev write zeroes read no split ...passed 00:08:59.054 Test: blockdev write zeroes read split ...passed 00:08:59.054 Test: blockdev write zeroes read split partial ...passed 00:08:59.054 Test: blockdev reset ...passed 00:08:59.054 Test: blockdev write read 8 blocks ...passed 00:08:59.054 Test: blockdev write read size > 128k ...passed 00:08:59.054 Test: blockdev write read invalid size ...passed 00:08:59.054 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:59.054 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:59.054 Test: blockdev write read max offset ...passed 00:08:59.054 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:59.054 Test: blockdev writev readv 8 blocks ...passed 00:08:59.054 Test: blockdev writev readv 30 x 1block ...passed 00:08:59.054 Test: blockdev writev readv block ...passed 00:08:59.054 Test: blockdev writev readv size > 128k ...passed 00:08:59.054 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:59.054 Test: blockdev comparev and writev ...passed 00:08:59.054 Test: blockdev nvme passthru rw ...passed 00:08:59.054 Test: blockdev nvme passthru vendor specific ...passed 00:08:59.054 Test: blockdev nvme admin passthru ...passed 00:08:59.054 Test: blockdev copy ...passed 00:08:59.054 Suite: bdevio tests on: Malloc1p1 00:08:59.054 Test: blockdev write read block ...passed 00:08:59.054 Test: blockdev write zeroes read block ...passed 00:08:59.054 Test: blockdev write zeroes read no split ...passed 00:08:59.054 Test: blockdev write zeroes read split ...passed 00:08:59.054 Test: blockdev write zeroes read split partial ...passed 00:08:59.054 Test: blockdev reset ...passed 00:08:59.054 Test: blockdev write read 8 blocks ...passed 00:08:59.054 Test: blockdev write read size > 128k ...passed 00:08:59.054 Test: blockdev write read invalid size ...passed 00:08:59.054 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:59.054 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:59.054 Test: blockdev write read max offset ...passed 00:08:59.054 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:59.054 Test: blockdev writev readv 8 blocks ...passed 00:08:59.054 Test: blockdev writev readv 30 x 1block ...passed 00:08:59.054 Test: blockdev writev readv block ...passed 00:08:59.054 Test: blockdev writev readv size > 128k ...passed 00:08:59.054 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:59.054 Test: blockdev comparev and writev ...passed 00:08:59.054 Test: blockdev nvme passthru rw ...passed 00:08:59.054 Test: blockdev nvme passthru vendor specific ...passed 00:08:59.054 Test: blockdev nvme admin passthru ...passed 00:08:59.054 Test: blockdev copy ...passed 00:08:59.054 Suite: bdevio tests on: Malloc1p0 00:08:59.054 Test: blockdev write read block ...passed 00:08:59.055 Test: blockdev 
write zeroes read block ...passed 00:08:59.055 Test: blockdev write zeroes read no split ...passed 00:08:59.055 Test: blockdev write zeroes read split ...passed 00:08:59.055 Test: blockdev write zeroes read split partial ...passed 00:08:59.055 Test: blockdev reset ...passed 00:08:59.055 Test: blockdev write read 8 blocks ...passed 00:08:59.055 Test: blockdev write read size > 128k ...passed 00:08:59.055 Test: blockdev write read invalid size ...passed 00:08:59.055 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:59.055 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:59.055 Test: blockdev write read max offset ...passed 00:08:59.055 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:59.055 Test: blockdev writev readv 8 blocks ...passed 00:08:59.055 Test: blockdev writev readv 30 x 1block ...passed 00:08:59.055 Test: blockdev writev readv block ...passed 00:08:59.055 Test: blockdev writev readv size > 128k ...passed 00:08:59.055 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:59.055 Test: blockdev comparev and writev ...passed 00:08:59.055 Test: blockdev nvme passthru rw ...passed 00:08:59.055 Test: blockdev nvme passthru vendor specific ...passed 00:08:59.055 Test: blockdev nvme admin passthru ...passed 00:08:59.055 Test: blockdev copy ...passed 00:08:59.055 Suite: bdevio tests on: Malloc0 00:08:59.055 Test: blockdev write read block ...passed 00:08:59.055 Test: blockdev write zeroes read block ...passed 00:08:59.055 Test: blockdev write zeroes read no split ...passed 00:08:59.055 Test: blockdev write zeroes read split ...passed 00:08:59.055 Test: blockdev write zeroes read split partial ...passed 00:08:59.055 Test: blockdev reset ...passed 00:08:59.055 Test: blockdev write read 8 blocks ...passed 00:08:59.055 Test: blockdev write read size > 128k ...passed 00:08:59.055 Test: blockdev write read invalid size ...passed 00:08:59.055 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:59.055 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:59.055 Test: blockdev write read max offset ...passed 00:08:59.055 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:59.055 Test: blockdev writev readv 8 blocks ...passed 00:08:59.055 Test: blockdev writev readv 30 x 1block ...passed 00:08:59.055 Test: blockdev writev readv block ...passed 00:08:59.055 Test: blockdev writev readv size > 128k ...passed 00:08:59.055 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:59.055 Test: blockdev comparev and writev ...passed 00:08:59.055 Test: blockdev nvme passthru rw ...passed 00:08:59.055 Test: blockdev nvme passthru vendor specific ...passed 00:08:59.055 Test: blockdev nvme admin passthru ...passed 00:08:59.055 Test: blockdev copy ...passed 00:08:59.055 00:08:59.055 Run Summary: Type Total Ran Passed Failed Inactive 00:08:59.055 suites 16 16 n/a 0 0 00:08:59.055 tests 368 368 368 0 0 00:08:59.055 asserts 2224 2224 2224 0 n/a 00:08:59.055 00:08:59.055 Elapsed time = 0.670 seconds 00:08:59.055 0 00:08:59.055 07:55:04 -- bdev/blockdev.sh@293 -- # killprocess 55788 00:08:59.055 07:55:04 -- common/autotest_common.sh@926 -- # '[' -z 55788 ']' 00:08:59.055 07:55:04 -- common/autotest_common.sh@930 -- # kill -0 55788 00:08:59.055 07:55:04 -- common/autotest_common.sh@931 -- # uname 00:08:59.055 07:55:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:59.055 07:55:04 -- common/autotest_common.sh@932 
-- # ps --no-headers -o comm= 55788 00:08:59.055 killing process with pid 55788 00:08:59.055 07:55:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:59.055 07:55:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:59.055 07:55:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55788' 00:08:59.055 07:55:04 -- common/autotest_common.sh@945 -- # kill 55788 00:08:59.055 07:55:04 -- common/autotest_common.sh@950 -- # wait 55788 00:08:59.312 07:55:05 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:08:59.312 00:08:59.312 real 0m1.672s 00:08:59.312 user 0m3.915s 00:08:59.312 sys 0m0.397s 00:08:59.312 07:55:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:59.312 07:55:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.312 ************************************ 00:08:59.312 END TEST bdev_bounds 00:08:59.312 ************************************ 00:08:59.570 07:55:05 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:08:59.570 07:55:05 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:08:59.570 07:55:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:59.570 07:55:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.570 ************************************ 00:08:59.570 START TEST bdev_nbd 00:08:59.570 ************************************ 00:08:59.570 07:55:05 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:08:59.570 07:55:05 -- bdev/blockdev.sh@298 -- # uname -s 00:08:59.570 07:55:05 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:08:59.570 07:55:05 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:59.570 07:55:05 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:59.570 07:55:05 -- bdev/blockdev.sh@302 -- # bdev_all=($2) 00:08:59.570 07:55:05 -- bdev/blockdev.sh@302 -- # local bdev_all 00:08:59.570 07:55:05 -- bdev/blockdev.sh@303 -- # local bdev_num=16 00:08:59.570 07:55:05 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:08:59.570 07:55:05 -- bdev/blockdev.sh@307 -- # modprobe -q nbd nbds_max=16 00:08:59.570 ************************************ 00:08:59.570 END TEST bdev_nbd 00:08:59.570 ************************************ 00:08:59.570 07:55:05 -- bdev/blockdev.sh@307 -- # return 0 00:08:59.570 00:08:59.570 real 0m0.009s 00:08:59.570 user 0m0.001s 00:08:59.570 sys 0m0.008s 00:08:59.570 07:55:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:59.570 07:55:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.570 07:55:05 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:08:59.570 07:55:05 -- bdev/blockdev.sh@762 -- # '[' bdev = nvme ']' 00:08:59.570 07:55:05 -- bdev/blockdev.sh@762 -- # '[' bdev = gpt ']' 00:08:59.570 07:55:05 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:08:59.570 07:55:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:59.570 07:55:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:59.570 07:55:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.570 ************************************ 00:08:59.570 START TEST bdev_fio 
00:08:59.570 ************************************ 00:08:59.570 07:55:05 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:08:59.570 07:55:05 -- bdev/blockdev.sh@329 -- # local env_context 00:08:59.570 07:55:05 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:08:59.570 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:08:59.570 07:55:05 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:08:59.570 07:55:05 -- bdev/blockdev.sh@337 -- # echo '' 00:08:59.570 07:55:05 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:08:59.570 07:55:05 -- bdev/blockdev.sh@337 -- # env_context= 00:08:59.570 07:55:05 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:08:59.570 07:55:05 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:08:59.570 07:55:05 -- common/autotest_common.sh@1260 -- # local workload=verify 00:08:59.570 07:55:05 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:08:59.570 07:55:05 -- common/autotest_common.sh@1262 -- # local env_context= 00:08:59.570 07:55:05 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:08:59.570 07:55:05 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:08:59.570 07:55:05 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:08:59.570 07:55:05 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:08:59.570 07:55:05 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:08:59.570 07:55:05 -- common/autotest_common.sh@1280 -- # cat 00:08:59.570 07:55:05 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:08:59.570 07:55:05 -- common/autotest_common.sh@1293 -- # cat 00:08:59.570 07:55:05 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:08:59.571 07:55:05 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:08:59.828 07:55:05 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:08:59.828 07:55:05 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:08:59.828 07:55:05 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:08:59.828 07:55:05 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc0]' 00:08:59.828 07:55:05 -- bdev/blockdev.sh@341 -- # echo filename=Malloc0 00:08:59.828 07:55:05 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:08:59.828 07:55:05 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p0]' 00:08:59.828 07:55:05 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p0 00:08:59.828 07:55:05 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:08:59.828 07:55:05 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p1]' 00:08:59.828 07:55:05 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p1 00:08:59.828 07:55:05 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:08:59.828 07:55:05 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p0]' 00:08:59.828 07:55:05 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p0 00:08:59.828 07:55:05 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:08:59.828 07:55:05 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p1]' 00:08:59.828 07:55:05 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p1 00:08:59.828 07:55:05 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:08:59.828 07:55:05 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p2]' 00:08:59.828 07:55:05 -- bdev/blockdev.sh@341 
-- # echo filename=Malloc2p2 00:08:59.828 07:55:05 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:08:59.828 07:55:05 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p3]' 00:08:59.828 07:55:05 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p3 00:08:59.828 07:55:05 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:08:59.828 07:55:05 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p4]' 00:08:59.828 07:55:05 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p4 00:08:59.828 07:55:05 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:08:59.828 07:55:05 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p5]' 00:08:59.828 07:55:05 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p5 00:08:59.828 07:55:05 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:08:59.828 07:55:05 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p6]' 00:08:59.828 07:55:05 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p6 00:08:59.828 07:55:05 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:08:59.828 07:55:05 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p7]' 00:08:59.828 07:55:05 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p7 00:08:59.828 07:55:05 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:08:59.828 07:55:05 -- bdev/blockdev.sh@340 -- # echo '[job_TestPT]' 00:08:59.828 07:55:05 -- bdev/blockdev.sh@341 -- # echo filename=TestPT 00:08:59.828 07:55:05 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:08:59.828 07:55:05 -- bdev/blockdev.sh@340 -- # echo '[job_raid0]' 00:08:59.828 07:55:05 -- bdev/blockdev.sh@341 -- # echo filename=raid0 00:08:59.828 07:55:05 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:08:59.828 07:55:05 -- bdev/blockdev.sh@340 -- # echo '[job_concat0]' 00:08:59.828 07:55:05 -- bdev/blockdev.sh@341 -- # echo filename=concat0 00:08:59.828 07:55:05 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:08:59.828 07:55:05 -- bdev/blockdev.sh@340 -- # echo '[job_raid1]' 00:08:59.828 07:55:05 -- bdev/blockdev.sh@341 -- # echo filename=raid1 00:08:59.828 07:55:05 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:08:59.828 07:55:05 -- bdev/blockdev.sh@340 -- # echo '[job_AIO0]' 00:08:59.828 07:55:05 -- bdev/blockdev.sh@341 -- # echo filename=AIO0 00:08:59.828 07:55:05 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:08:59.828 07:55:05 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:08:59.828 07:55:05 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:08:59.828 07:55:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:59.828 07:55:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.828 ************************************ 00:08:59.828 START TEST bdev_fio_rw_verify 00:08:59.828 ************************************ 00:08:59.828 07:55:05 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:08:59.828 07:55:05 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:08:59.828 07:55:05 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:08:59.828 07:55:05 -- common/autotest_common.sh@1318 -- # sanitizers=(libasan libclang_rt.asan) 00:08:59.828 07:55:05 -- common/autotest_common.sh@1318 -- # local sanitizers 00:08:59.828 07:55:05 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:08:59.828 07:55:05 -- common/autotest_common.sh@1320 -- # shift 00:08:59.828 07:55:05 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:08:59.828 07:55:05 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:08:59.828 07:55:05 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:08:59.828 07:55:05 -- common/autotest_common.sh@1324 -- # grep libasan 00:08:59.828 07:55:05 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:08:59.828 07:55:05 -- common/autotest_common.sh@1324 -- # asan_lib=/lib64/libasan.so.6 00:08:59.828 07:55:05 -- common/autotest_common.sh@1325 -- # [[ -n /lib64/libasan.so.6 ]] 00:08:59.828 07:55:05 -- common/autotest_common.sh@1326 -- # break 00:08:59.828 07:55:05 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib64/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:08:59.828 07:55:05 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:09:00.085 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:00.085 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:00.085 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:00.085 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:00.085 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:00.085 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:00.085 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:00.085 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:00.085 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:00.085 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:00.085 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:00.085 job_TestPT: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:00.085 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:00.085 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:00.085 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:00.085 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:00.085 fio-3.35 00:09:00.085 Starting 16 threads 00:09:12.302 00:09:12.302 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=55911: Sat Jul 13 07:55:16 2024 00:09:12.302 read: IOPS=113k, BW=440MiB/s (461MB/s)(4410MiB/10035msec) 00:09:12.302 slat (nsec): min=770, max=71712k, avg=10857.22, stdev=198735.47 00:09:12.302 clat (usec): min=3, max=74407, avg=121.93, stdev=673.53 00:09:12.302 lat (usec): min=8, max=74421, avg=132.79, stdev=702.07 00:09:12.302 clat percentiles (usec): 00:09:12.302 | 50.000th=[ 73], 99.000th=[ 734], 99.900th=[10683], 99.990th=[23462], 00:09:12.302 | 99.999th=[59507] 00:09:12.302 write: IOPS=179k, BW=700MiB/s (734MB/s)(7001MiB/10001msec); 0 zone resets 00:09:12.302 slat (usec): min=2, max=161465, avg=55.39, stdev=1001.24 00:09:12.302 clat (usec): min=3, max=143109, avg=285.91, stdev=1954.86 00:09:12.302 lat (usec): min=17, max=161681, avg=341.30, stdev=2197.90 00:09:12.302 clat percentiles (usec): 00:09:12.302 | 50.000th=[ 116], 99.000th=[ 5014], 99.900th=[ 28181], 00:09:12.302 | 99.990th=[ 70779], 99.999th=[120062] 00:09:12.302 bw ( KiB/s): min=490160, max=997462, per=100.00%, avg=718686.60, stdev=9309.62, samples=305 00:09:12.302 iops : min=122536, max=249361, avg=179668.29, stdev=2327.43, samples=305 00:09:12.302 lat (usec) : 4=0.01%, 10=0.01%, 20=0.40%, 50=15.78%, 100=39.16% 00:09:12.302 lat (usec) : 250=39.70%, 500=1.52%, 750=2.04%, 1000=0.37% 00:09:12.302 lat (msec) : 2=0.13%, 4=0.14%, 10=0.32%, 20=0.32%, 50=0.09% 00:09:12.302 lat (msec) : 100=0.02%, 250=0.01% 00:09:12.302 cpu : usr=53.62%, sys=1.17%, ctx=18785, majf=0, minf=134348 00:09:12.302 IO depths : 1=12.4%, 2=24.7%, 4=50.2%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:12.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.302 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.302 issued rwts: total=1129077,1792333,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.302 latency : target=0, window=0, percentile=100.00%, depth=8 00:09:12.302 00:09:12.302 Run status group 0 (all jobs): 00:09:12.302 READ: bw=440MiB/s (461MB/s), 440MiB/s-440MiB/s (461MB/s-461MB/s), io=4410MiB (4625MB), run=10035-10035msec 00:09:12.302 WRITE: bw=700MiB/s (734MB/s), 700MiB/s-700MiB/s (734MB/s-734MB/s), io=7001MiB (7341MB), run=10001-10001msec 00:09:12.302 ----------------------------------------------------- 00:09:12.302 Suppressions used: 00:09:12.302 count bytes template 00:09:12.302 16 140 /usr/src/fio/parse.c 00:09:12.302 13507 1296672 /usr/src/fio/iolog.c 00:09:12.302 2 596 libcrypto.so 00:09:12.302 ----------------------------------------------------- 00:09:12.302 00:09:12.302 ************************************ 00:09:12.302 END TEST bdev_fio_rw_verify 00:09:12.302 ************************************ 00:09:12.302 00:09:12.302 real 0m11.731s 00:09:12.302 user 1m27.539s 00:09:12.302 sys 0m2.411s 00:09:12.302 07:55:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.302 
07:55:17 -- common/autotest_common.sh@10 -- # set +x 00:09:12.302 07:55:17 -- bdev/blockdev.sh@348 -- # rm -f 00:09:12.302 07:55:17 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:09:12.302 07:55:17 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:09:12.302 07:55:17 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:09:12.302 07:55:17 -- common/autotest_common.sh@1260 -- # local workload=trim 00:09:12.302 07:55:17 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:09:12.302 07:55:17 -- common/autotest_common.sh@1262 -- # local env_context= 00:09:12.302 07:55:17 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:09:12.302 07:55:17 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:09:12.302 07:55:17 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:09:12.302 07:55:17 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:09:12.302 07:55:17 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:09:12.302 07:55:17 -- common/autotest_common.sh@1280 -- # cat 00:09:12.302 07:55:17 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:09:12.302 07:55:17 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:09:12.302 07:55:17 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:09:12.302 07:55:17 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:09:12.303 07:55:17 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "87fdf3d5-fa19-4193-a7a4-2acad2e3d43d"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "87fdf3d5-fa19-4193-a7a4-2acad2e3d43d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "0e04f3fd-def0-5504-942b-8f43f05a1ae8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "0e04f3fd-def0-5504-942b-8f43f05a1ae8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "0c9609ad-b46e-5003-8cc9-2e574d0d84f5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "0c9609ad-b46e-5003-8cc9-2e574d0d84f5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' 
' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "476adf41-dbf6-5697-821b-5eb45f618182"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "476adf41-dbf6-5697-821b-5eb45f618182",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "cd4ca145-900f-5887-8700-a6264ff5db68"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cd4ca145-900f-5887-8700-a6264ff5db68",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "c8cc2394-3b8c-51b9-a898-b9032523c3a0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c8cc2394-3b8c-51b9-a898-b9032523c3a0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "a9947bce-8eae-538d-8337-9d13ced89114"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a9947bce-8eae-538d-8337-9d13ced89114",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' 
"7d793c51-736b-501b-8a8f-2a84ceb9e410"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7d793c51-736b-501b-8a8f-2a84ceb9e410",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "8569505a-6c28-52b3-8a0d-4c0306c888d5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8569505a-6c28-52b3-8a0d-4c0306c888d5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "d3a62988-602e-5805-b604-1dce2e731ed4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d3a62988-602e-5805-b604-1dce2e731ed4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "42163303-6378-55ea-8141-5b61f912125a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "42163303-6378-55ea-8141-5b61f912125a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "927553b5-37d3-511d-b151-b1fd32e1c673"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "927553b5-37d3-511d-b151-b1fd32e1c673",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "18e42ca6-bc8c-434b-8805-7fd3e66c4c80"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "18e42ca6-bc8c-434b-8805-7fd3e66c4c80",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "18e42ca6-bc8c-434b-8805-7fd3e66c4c80",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "6c51c52c-2db0-4d26-954d-d36826815aaa",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "d2cc0385-d93f-4262-8848-efe79d22a76b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "afbb1cc5-8f13-4988-bbb1-8fcbb25cf356"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "afbb1cc5-8f13-4988-bbb1-8fcbb25cf356",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "afbb1cc5-8f13-4988-bbb1-8fcbb25cf356",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "683775de-2621-414d-93c7-57ea018e4596",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "018249c6-f325-4afa-b1a4-639bd3e4904f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "ef515ae8-e05b-4b01-95bd-7576fc8c070a"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ef515ae8-e05b-4b01-95bd-7576fc8c070a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 
0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "ef515ae8-e05b-4b01-95bd-7576fc8c070a",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "5d7982fa-0db7-40c1-b15d-d31fda8bdba1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "49a393e9-89a7-44f5-95ab-73f32232f52a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "4051591d-1f64-4564-ba84-b953c56723c4"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "4051591d-1f64-4564-ba84-b953c56723c4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:09:12.303 07:55:17 -- bdev/blockdev.sh@353 -- # [[ -n Malloc0 00:09:12.303 Malloc1p0 00:09:12.303 Malloc1p1 00:09:12.303 Malloc2p0 00:09:12.303 Malloc2p1 00:09:12.303 Malloc2p2 00:09:12.303 Malloc2p3 00:09:12.303 Malloc2p4 00:09:12.303 Malloc2p5 00:09:12.303 Malloc2p6 00:09:12.303 Malloc2p7 00:09:12.303 TestPT 00:09:12.303 raid0 00:09:12.303 concat0 ]] 00:09:12.303 07:55:17 -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:09:12.304 07:55:17 -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "87fdf3d5-fa19-4193-a7a4-2acad2e3d43d"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "87fdf3d5-fa19-4193-a7a4-2acad2e3d43d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "0e04f3fd-def0-5504-942b-8f43f05a1ae8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": 
"0e04f3fd-def0-5504-942b-8f43f05a1ae8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "0c9609ad-b46e-5003-8cc9-2e574d0d84f5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "0c9609ad-b46e-5003-8cc9-2e574d0d84f5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "476adf41-dbf6-5697-821b-5eb45f618182"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "476adf41-dbf6-5697-821b-5eb45f618182",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "cd4ca145-900f-5887-8700-a6264ff5db68"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cd4ca145-900f-5887-8700-a6264ff5db68",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "c8cc2394-3b8c-51b9-a898-b9032523c3a0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c8cc2394-3b8c-51b9-a898-b9032523c3a0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' 
"split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "a9947bce-8eae-538d-8337-9d13ced89114"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a9947bce-8eae-538d-8337-9d13ced89114",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "7d793c51-736b-501b-8a8f-2a84ceb9e410"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7d793c51-736b-501b-8a8f-2a84ceb9e410",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "8569505a-6c28-52b3-8a0d-4c0306c888d5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8569505a-6c28-52b3-8a0d-4c0306c888d5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "d3a62988-602e-5805-b604-1dce2e731ed4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d3a62988-602e-5805-b604-1dce2e731ed4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "42163303-6378-55ea-8141-5b61f912125a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "42163303-6378-55ea-8141-5b61f912125a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "927553b5-37d3-511d-b151-b1fd32e1c673"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "927553b5-37d3-511d-b151-b1fd32e1c673",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "18e42ca6-bc8c-434b-8805-7fd3e66c4c80"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "18e42ca6-bc8c-434b-8805-7fd3e66c4c80",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "18e42ca6-bc8c-434b-8805-7fd3e66c4c80",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "6c51c52c-2db0-4d26-954d-d36826815aaa",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "d2cc0385-d93f-4262-8848-efe79d22a76b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "afbb1cc5-8f13-4988-bbb1-8fcbb25cf356"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "afbb1cc5-8f13-4988-bbb1-8fcbb25cf356",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' 
],' ' "driver_specific": {' ' "raid": {' ' "uuid": "afbb1cc5-8f13-4988-bbb1-8fcbb25cf356",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "683775de-2621-414d-93c7-57ea018e4596",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "018249c6-f325-4afa-b1a4-639bd3e4904f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "ef515ae8-e05b-4b01-95bd-7576fc8c070a"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ef515ae8-e05b-4b01-95bd-7576fc8c070a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "ef515ae8-e05b-4b01-95bd-7576fc8c070a",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "5d7982fa-0db7-40c1-b15d-d31fda8bdba1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "49a393e9-89a7-44f5-95ab-73f32232f52a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "4051591d-1f64-4564-ba84-b953c56723c4"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "4051591d-1f64-4564-ba84-b953c56723c4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:09:12.304 07:55:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:12.304 07:55:17 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc0]' 00:09:12.304 07:55:17 -- bdev/blockdev.sh@356 -- # echo filename=Malloc0 00:09:12.304 07:55:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:12.304 07:55:17 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p0]' 00:09:12.304 07:55:17 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p0 00:09:12.304 07:55:17 -- 
bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:12.304 07:55:17 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p1]' 00:09:12.304 07:55:17 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p1 00:09:12.304 07:55:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:12.304 07:55:17 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p0]' 00:09:12.304 07:55:17 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p0 00:09:12.304 07:55:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:12.304 07:55:17 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p1]' 00:09:12.304 07:55:17 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p1 00:09:12.304 07:55:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:12.304 07:55:17 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p2]' 00:09:12.304 07:55:17 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p2 00:09:12.304 07:55:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:12.304 07:55:17 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p3]' 00:09:12.304 07:55:17 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p3 00:09:12.304 07:55:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:12.304 07:55:17 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p4]' 00:09:12.304 07:55:17 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p4 00:09:12.304 07:55:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:12.304 07:55:17 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p5]' 00:09:12.304 07:55:17 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p5 00:09:12.304 07:55:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:12.304 07:55:17 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p6]' 00:09:12.304 07:55:17 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p6 00:09:12.304 07:55:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:12.304 07:55:17 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p7]' 00:09:12.304 07:55:17 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p7 00:09:12.304 07:55:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:12.304 07:55:17 -- bdev/blockdev.sh@355 -- # echo '[job_TestPT]' 00:09:12.304 07:55:17 -- bdev/blockdev.sh@356 -- # echo filename=TestPT 00:09:12.304 07:55:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:12.304 07:55:17 -- bdev/blockdev.sh@355 -- # echo '[job_raid0]' 00:09:12.304 07:55:17 -- bdev/blockdev.sh@356 -- # echo filename=raid0 00:09:12.304 07:55:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:12.304 07:55:17 -- bdev/blockdev.sh@355 -- # echo '[job_concat0]' 00:09:12.304 07:55:17 -- bdev/blockdev.sh@356 -- # echo 
filename=concat0 00:09:12.304 07:55:17 -- bdev/blockdev.sh@365 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:09:12.304 07:55:17 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:09:12.304 07:55:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:12.304 07:55:17 -- common/autotest_common.sh@10 -- # set +x 00:09:12.304 ************************************ 00:09:12.304 START TEST bdev_fio_trim 00:09:12.305 ************************************ 00:09:12.305 07:55:17 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:09:12.305 07:55:17 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:09:12.305 07:55:17 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:09:12.305 07:55:17 -- common/autotest_common.sh@1318 -- # sanitizers=(libasan libclang_rt.asan) 00:09:12.305 07:55:17 -- common/autotest_common.sh@1318 -- # local sanitizers 00:09:12.305 07:55:17 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:09:12.305 07:55:17 -- common/autotest_common.sh@1320 -- # shift 00:09:12.305 07:55:17 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:09:12.305 07:55:17 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:09:12.305 07:55:17 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:09:12.305 07:55:17 -- common/autotest_common.sh@1324 -- # grep libasan 00:09:12.305 07:55:17 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:09:12.305 07:55:17 -- common/autotest_common.sh@1324 -- # asan_lib=/lib64/libasan.so.6 00:09:12.305 07:55:17 -- common/autotest_common.sh@1325 -- # [[ -n /lib64/libasan.so.6 ]] 00:09:12.305 07:55:17 -- common/autotest_common.sh@1326 -- # break 00:09:12.305 07:55:17 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib64/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:09:12.305 07:55:17 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:09:12.305 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:12.305 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:12.305 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:12.305 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:12.305 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:12.305 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:12.305 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:12.305 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:12.305 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:12.305 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:12.305 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:12.305 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:12.305 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:12.305 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:12.305 fio-3.35 00:09:12.305 Starting 14 threads 00:09:24.506 00:09:24.506 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=56107: Sat Jul 13 07:55:28 2024 00:09:24.506 write: IOPS=305k, BW=1192MiB/s (1250MB/s)(11.6GiB/10002msec); 0 zone resets 00:09:24.506 slat (nsec): min=812, max=36035k, avg=14169.82, stdev=225664.89 00:09:24.506 clat (usec): min=9, max=42250, avg=134.73, stdev=768.00 00:09:24.506 lat (usec): min=11, max=42260, avg=148.90, stdev=800.05 00:09:24.506 clat percentiles (usec): 00:09:24.506 | 50.000th=[ 78], 99.000th=[ 717], 99.900th=[13173], 99.990th=[21103], 00:09:24.506 | 99.999th=[31065] 00:09:24.506 bw ( MiB/s): min= 800, max= 1654, per=99.98%, avg=1192.24, stdev=21.14, samples=266 00:09:24.506 iops : min=204970, max=423487, avg=305210.58, stdev=5411.77, samples=266 00:09:24.506 trim: IOPS=305k, BW=1192MiB/s (1250MB/s)(11.6GiB/10002msec); 0 zone resets 00:09:24.506 slat (nsec): min=1446, max=42141k, avg=10538.61, stdev=201061.41 00:09:24.506 clat (nsec): min=1741, max=42260k, avg=118461.09, stdev=661306.23 00:09:24.506 lat (usec): min=6, max=42267, avg=129.00, stdev=691.11 00:09:24.506 clat percentiles (usec): 00:09:24.506 | 50.000th=[ 86], 99.000th=[ 151], 99.900th=[13042], 99.990th=[19006], 00:09:24.506 | 99.999th=[28181] 00:09:24.506 bw ( MiB/s): min= 800, max= 1654, per=99.98%, avg=1192.25, stdev=21.14, samples=266 00:09:24.506 iops : min=204970, max=423471, avg=305212.32, stdev=5411.90, samples=266 00:09:24.506 lat (usec) : 2=0.01%, 4=0.01%, 10=0.25%, 20=0.41%, 50=8.83% 00:09:24.506 lat (usec) : 100=63.00%, 250=25.49%, 500=1.03%, 750=0.54%, 1000=0.14% 00:09:24.506 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.26%, 50=0.01% 00:09:24.506 cpu : usr=70.88%, sys=0.01%, ctx=7505, majf=0, minf=9130 00:09:24.506 IO depths : 1=12.2%, 2=24.5%, 4=50.1%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:24.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.506 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.506 issued rwts: total=0,3053241,3053243,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.506 latency : target=0, window=0, percentile=100.00%, depth=8 00:09:24.506 00:09:24.506 
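The 14 fio threads above correspond exactly to the trim-capable bdevs: the @354-@356 steps earlier filtered the bdev dump with jq on supported_io_types.unmap and emitted one [job_<name>] section per match, so the 16 bdevs in the dump became 14 jobs (raid1 and AIO0 report "unmap": false). A minimal sketch of that generation loop, assuming as in blockdev.sh that bdevs holds the JSON objects; the $fio_config name is illustrative for the test/bdev/bdev.fio file the sections are appended to:

# sketch of the @354-@356 job-file generation; "bdevs" and "$fio_config" are assumed names
for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name'); do
    echo "[job_$b]"      # one fio job section per trim-capable bdev, e.g. [job_Malloc0]
    echo "filename=$b"   # the spdk_bdev ioengine opens bdevs by name
done >> "$fio_config"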
Run status group 0 (all jobs): 00:09:24.506 WRITE: bw=1192MiB/s (1250MB/s), 1192MiB/s-1192MiB/s (1250MB/s-1250MB/s), io=11.6GiB (12.5GB), run=10002-10002msec 00:09:24.506 TRIM: bw=1192MiB/s (1250MB/s), 1192MiB/s-1192MiB/s (1250MB/s-1250MB/s), io=11.6GiB (12.5GB), run=10002-10002msec 00:09:24.506 ----------------------------------------------------- 00:09:24.506 Suppressions used: 00:09:24.506 count bytes template 00:09:24.506 14 129 /usr/src/fio/parse.c 00:09:24.506 2 596 libcrypto.so 00:09:24.506 ----------------------------------------------------- 00:09:24.506 00:09:24.506 ************************************ 00:09:24.506 END TEST bdev_fio_trim 00:09:24.506 ************************************ 00:09:24.506 00:09:24.506 real 0m11.389s 00:09:24.506 user 1m40.293s 00:09:24.506 sys 0m0.401s 00:09:24.506 07:55:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:24.506 07:55:28 -- common/autotest_common.sh@10 -- # set +x 00:09:24.506 07:55:28 -- bdev/blockdev.sh@366 -- # rm -f 00:09:24.506 07:55:28 -- bdev/blockdev.sh@367 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:09:24.506 /home/vagrant/spdk_repo/spdk 00:09:24.506 ************************************ 00:09:24.506 END TEST bdev_fio 00:09:24.506 ************************************ 00:09:24.506 07:55:28 -- bdev/blockdev.sh@368 -- # popd 00:09:24.506 07:55:28 -- bdev/blockdev.sh@369 -- # trap - SIGINT SIGTERM EXIT 00:09:24.506 00:09:24.506 real 0m23.496s 00:09:24.506 user 3m7.975s 00:09:24.507 sys 0m2.953s 00:09:24.507 07:55:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:24.507 07:55:28 -- common/autotest_common.sh@10 -- # set +x 00:09:24.507 07:55:28 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:24.507 07:55:28 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:24.507 07:55:28 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:09:24.507 07:55:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:24.507 07:55:28 -- common/autotest_common.sh@10 -- # set +x 00:09:24.507 ************************************ 00:09:24.507 START TEST bdev_verify 00:09:24.507 ************************************ 00:09:24.507 07:55:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:24.507 [2024-07-13 07:55:28.910571] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:09:24.507 [2024-07-13 07:55:28.910748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56280 ] 00:09:24.507 [2024-07-13 07:55:29.038842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:24.507 [2024-07-13 07:55:29.083076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.507 [2024-07-13 07:55:29.083091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.507 [2024-07-13 07:55:29.216352] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:24.507 [2024-07-13 07:55:29.216443] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:24.507 [2024-07-13 07:55:29.224316] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:24.507 [2024-07-13 07:55:29.224379] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:24.507 [2024-07-13 07:55:29.232350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:24.507 [2024-07-13 07:55:29.232383] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:09:24.507 [2024-07-13 07:55:29.232416] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:09:24.507 [2024-07-13 07:55:29.304484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:24.507 [2024-07-13 07:55:29.304566] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.507 [2024-07-13 07:55:29.304625] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002a980 00:09:24.507 [2024-07-13 07:55:29.304648] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.507 [2024-07-13 07:55:29.306501] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.507 [2024-07-13 07:55:29.306538] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:09:24.507 Running I/O for 5 seconds... 
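Each bdev is then verified concurrently on both cores. Restating the bdevperf invocation from this stage with its flags annotated (meanings per common bdevperf usage; the -C reading is inferred from the paired Core Mask 0x1/0x2 rows in the table below):

# bdev_verify's bdevperf run, flags annotated (assumed-standard meanings)
#   --json .../bdev.json   replay the bdev configuration at startup
#   -q 128                 queue depth per job
#   -o 4096                IO size in bytes
#   -w verify              write, read back, and compare each block
#   -t 5                   run time in seconds
#   -m 0x3                 core mask: reactors on cores 0 and 1, as logged above
#   -C                     multithread mode; with two cores this appears to give each
#                          bdev one job per core (the Core Mask 0x1/0x2 pairs below)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3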
00:09:29.801 00:09:29.801 Latency(us) 00:09:29.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.801 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:29.801 Verification LBA range: start 0x0 length 0x1000 00:09:29.801 Malloc0 : 5.07 4105.95 16.04 0.00 0.00 31094.00 854.31 91375.91 00:09:29.801 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:29.801 Verification LBA range: start 0x1000 length 0x1000 00:09:29.801 Malloc0 : 5.06 4111.54 16.06 0.00 0.00 31026.22 827.00 118838.61 00:09:29.801 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:29.801 Verification LBA range: start 0x0 length 0x800 00:09:29.801 Malloc1p0 : 5.07 2784.05 10.88 0.00 0.00 45827.35 1786.64 55175.07 00:09:29.801 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:29.801 Verification LBA range: start 0x800 length 0x800 00:09:29.801 Malloc1p0 : 5.07 2804.41 10.95 0.00 0.00 45452.55 1864.66 55674.39 00:09:29.801 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:29.801 Verification LBA range: start 0x0 length 0x800 00:09:29.801 Malloc1p1 : 5.07 2783.85 10.87 0.00 0.00 45798.06 1638.40 53677.10 00:09:29.801 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:29.801 Verification LBA range: start 0x800 length 0x800 00:09:29.801 Malloc1p1 : 5.07 2804.21 10.95 0.00 0.00 45425.86 1669.61 53926.77 00:09:29.801 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:29.801 Verification LBA range: start 0x0 length 0x200 00:09:29.801 Malloc2p0 : 5.07 2783.67 10.87 0.00 0.00 45772.69 1732.02 51929.48 00:09:29.801 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:29.801 Verification LBA range: start 0x200 length 0x200 00:09:29.801 Malloc2p0 : 5.07 2804.02 10.95 0.00 0.00 45399.76 1771.03 52179.14 00:09:29.801 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:29.801 Verification LBA range: start 0x0 length 0x200 00:09:29.801 Malloc2p1 : 5.07 2783.46 10.87 0.00 0.00 45747.44 1732.02 50181.85 00:09:29.801 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:29.801 Verification LBA range: start 0x200 length 0x200 00:09:29.801 Malloc2p1 : 5.07 2803.85 10.95 0.00 0.00 45372.73 1724.22 50431.51 00:09:29.801 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:29.801 Verification LBA range: start 0x0 length 0x200 00:09:29.801 Malloc2p2 : 5.07 2783.21 10.87 0.00 0.00 45721.91 1669.61 48683.89 00:09:29.801 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:29.801 Verification LBA range: start 0x200 length 0x200 00:09:29.801 Malloc2p2 : 5.07 2803.68 10.95 0.00 0.00 45343.37 1677.41 48933.55 00:09:29.801 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:29.801 Verification LBA range: start 0x0 length 0x200 00:09:29.801 Malloc2p3 : 5.07 2783.01 10.87 0.00 0.00 45695.85 1700.82 46936.26 00:09:29.801 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:29.801 Verification LBA range: start 0x200 length 0x200 00:09:29.801 Malloc2p3 : 5.07 2803.52 10.95 0.00 0.00 45316.70 1700.82 47185.92 00:09:29.801 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:29.801 Verification LBA range: start 0x0 length 0x200 00:09:29.801 Malloc2p4 : 5.07 2782.85 10.87 0.00 0.00 45666.59 
1693.01 45188.63 00:09:29.801 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:29.801 Verification LBA range: start 0x200 length 0x200 00:09:29.801 Malloc2p4 : 5.07 2803.34 10.95 0.00 0.00 45291.21 1685.21 45438.29 00:09:29.801 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:29.801 Verification LBA range: start 0x0 length 0x200 00:09:29.801 Malloc2p5 : 5.07 2782.68 10.87 0.00 0.00 45639.00 1669.61 43690.67 00:09:29.801 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:29.801 Verification LBA range: start 0x200 length 0x200 00:09:29.801 Malloc2p5 : 5.07 2817.60 11.01 0.00 0.00 45100.91 1622.80 43690.67 00:09:29.801 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:29.801 Verification LBA range: start 0x0 length 0x200 00:09:29.801 Malloc2p6 : 5.07 2782.52 10.87 0.00 0.00 45611.99 1646.20 41943.04 00:09:29.801 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:29.801 Verification LBA range: start 0x200 length 0x200 00:09:29.801 Malloc2p6 : 5.07 2817.36 11.01 0.00 0.00 45075.56 1599.39 41943.04 00:09:29.801 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:29.801 Verification LBA range: start 0x0 length 0x200 00:09:29.801 Malloc2p7 : 5.08 2782.35 10.87 0.00 0.00 45585.15 1614.99 40445.07 00:09:29.801 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:29.801 Verification LBA range: start 0x200 length 0x200 00:09:29.801 Malloc2p7 : 5.07 2817.18 11.00 0.00 0.00 45049.00 1560.38 40445.07 00:09:29.801 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:29.801 Verification LBA range: start 0x0 length 0x1000 00:09:29.801 TestPT : 5.08 2769.71 10.82 0.00 0.00 45766.51 4213.03 40445.07 00:09:29.801 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:29.801 Verification LBA range: start 0x1000 length 0x1000 00:09:29.801 TestPT : 5.07 2782.91 10.87 0.00 0.00 45564.94 4181.82 67408.46 00:09:29.801 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:29.801 Verification LBA range: start 0x0 length 0x2000 00:09:29.801 raid0 : 5.08 2781.99 10.87 0.00 0.00 45513.64 1708.62 34952.53 00:09:29.801 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:29.801 Verification LBA range: start 0x2000 length 0x2000 00:09:29.801 raid0 : 5.07 2816.85 11.00 0.00 0.00 44976.91 1685.21 34203.55 00:09:29.801 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:29.801 Verification LBA range: start 0x0 length 0x2000 00:09:29.801 concat0 : 5.08 2781.85 10.87 0.00 0.00 45485.45 1661.81 34952.53 00:09:29.801 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:29.801 Verification LBA range: start 0x2000 length 0x2000 00:09:29.801 concat0 : 5.07 2816.71 11.00 0.00 0.00 44946.75 1716.42 34453.21 00:09:29.801 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:29.801 Verification LBA range: start 0x0 length 0x1000 00:09:29.801 raid1 : 5.08 2797.65 10.93 0.00 0.00 45240.06 760.69 34952.53 00:09:29.801 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:29.801 Verification LBA range: start 0x1000 length 0x1000 00:09:29.801 raid1 : 5.08 2816.53 11.00 0.00 0.00 44918.82 1966.08 34952.53 00:09:29.801 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:29.801 Verification LBA range: start 
0x0 length 0x4e2 00:09:29.801 AIO0 : 5.08 2792.23 10.91 0.00 0.00 45288.48 497.37 36450.50 00:09:29.801 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:29.801 Verification LBA range: start 0x4e2 length 0x4e2 00:09:29.801 AIO0 : 5.08 2809.52 10.97 0.00 0.00 44992.99 1802.24 36200.84 00:09:29.801 =================================================================================================================== 00:09:29.801 Total : 92094.28 359.74 0.00 0.00 44138.14 497.37 118838.61 00:09:29.801 ************************************ 00:09:29.801 END TEST bdev_verify 00:09:29.801 ************************************ 00:09:29.801 00:09:29.801 real 0m6.121s 00:09:29.801 user 0m11.068s 00:09:29.801 sys 0m0.539s 00:09:29.801 07:55:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:29.801 07:55:34 -- common/autotest_common.sh@10 -- # set +x 00:09:29.801 07:55:34 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:29.801 07:55:34 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:09:29.801 07:55:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:29.801 07:55:34 -- common/autotest_common.sh@10 -- # set +x 00:09:29.801 ************************************ 00:09:29.801 START TEST bdev_verify_big_io 00:09:29.801 ************************************ 00:09:29.802 07:55:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:29.802 [2024-07-13 07:55:35.088811] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:09:29.802 [2024-07-13 07:55:35.088978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56380 ] 00:09:29.802 [2024-07-13 07:55:35.218884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:29.802 [2024-07-13 07:55:35.262146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.802 [2024-07-13 07:55:35.262161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.802 [2024-07-13 07:55:35.395624] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:29.802 [2024-07-13 07:55:35.395693] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:29.802 [2024-07-13 07:55:35.403567] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:29.802 [2024-07-13 07:55:35.403633] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:29.802 [2024-07-13 07:55:35.411619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:29.802 [2024-07-13 07:55:35.411665] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:09:29.802 [2024-07-13 07:55:35.411700] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:09:29.802 [2024-07-13 07:55:35.483781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:29.802 [2024-07-13 07:55:35.483862] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.802 [2024-07-13 07:55:35.483924] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002a980 00:09:29.802 [2024-07-13 07:55:35.483945] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.802 [2024-07-13 07:55:35.485822] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.802 [2024-07-13 07:55:35.485858] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:09:30.060 [2024-07-13 07:55:35.622196] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:09:30.060 [2024-07-13 07:55:35.623141] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:09:30.060 [2024-07-13 07:55:35.624488] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:09:30.060 [2024-07-13 07:55:35.625681] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:09:30.060 [2024-07-13 07:55:35.626503] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:09:30.060 [2024-07-13 07:55:35.627768] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:09:30.060 [2024-07-13 07:55:35.628449] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:09:30.060 [2024-07-13 07:55:35.629694] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:09:30.060 [2024-07-13 07:55:35.630532] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:09:30.060 [2024-07-13 07:55:35.631817] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:09:30.060 [2024-07-13 07:55:35.632620] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:09:30.060 [2024-07-13 07:55:35.633911] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:09:30.060 [2024-07-13 07:55:35.635224] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:09:30.060 [2024-07-13 07:55:35.636019] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:09:30.060 [2024-07-13 07:55:35.637322] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:09:30.060 [2024-07-13 07:55:35.638243] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:09:30.060 [2024-07-13 07:55:35.661056] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:09:30.060 [2024-07-13 07:55:35.663118] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:09:30.060 Running I/O for 5 seconds... 00:09:36.621 00:09:36.621 Latency(us) 00:09:36.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:36.621 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:36.621 Verification LBA range: start 0x0 length 0x100 00:09:36.621 Malloc0 : 5.27 769.29 48.08 0.00 0.00 164121.72 11796.48 571224.26 00:09:36.621 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:36.621 Verification LBA range: start 0x100 length 0x100 00:09:36.621 Malloc0 : 5.27 793.16 49.57 0.00 0.00 158443.58 9050.21 579213.41 00:09:36.621 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:36.621 Verification LBA range: start 0x0 length 0x80 00:09:36.621 Malloc1p0 : 5.27 552.57 34.54 0.00 0.00 227163.49 22344.66 505313.77 00:09:36.621 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:36.621 Verification LBA range: start 0x80 length 0x80 00:09:36.621 Malloc1p0 : 5.34 383.19 23.95 0.00 0.00 323098.33 20846.69 511305.63 00:09:36.621 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:36.621 Verification LBA range: start 0x0 length 0x80 00:09:36.621 Malloc1p1 : 5.40 226.14 14.13 0.00 0.00 545334.25 21970.16 1062557.01 00:09:36.621 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:36.621 Verification LBA range: start 0x80 length 0x80 00:09:36.621 Malloc1p1 : 5.43 237.03 14.81 0.00 0.00 519252.43 21096.35 1038589.56 00:09:36.621 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:09:36.621 Verification LBA range: start 0x0 length 0x20 00:09:36.621 Malloc2p0 : 5.32 141.47 8.84 0.00 0.00 218324.63 3308.01 325557.88 00:09:36.621 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:09:36.621 Verification LBA range: start 0x20 length 0x20 00:09:36.621 Malloc2p0 : 5.32 146.05 9.13 0.00 0.00 210940.48 2980.33 321563.31 00:09:36.621 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:09:36.621 Verification LBA range: start 0x0 length 0x20 00:09:36.621 Malloc2p1 : 5.32 141.46 8.84 0.00 0.00 217859.01 3089.55 319566.02 00:09:36.621 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:09:36.621 Verification LBA range: start 0x20 length 0x20 00:09:36.621 Malloc2p1 : 5.32 146.04 9.13 0.00 0.00 210494.19 3073.95 313574.16 00:09:36.621 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:09:36.621 Verification LBA range: start 0x0 length 0x20 00:09:36.621 Malloc2p2 : 5.32 141.45 8.84 0.00 0.00 217480.98 3089.55 311576.87 00:09:36.621 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:09:36.621 Verification LBA range: start 0x20 length 0x20 00:09:36.621 Malloc2p2 : 5.32 146.03 9.13 0.00 0.00 210003.70 3838.54 303587.72 00:09:36.621 Job: Malloc2p3 (Core Mask 0x1, 
workload: verify, depth: 32, IO size: 65536) 00:09:36.621 Verification LBA range: start 0x0 length 0x20 00:09:36.621 Malloc2p3 : 5.32 141.44 8.84 0.00 0.00 217030.59 3900.95 303587.72 00:09:36.621 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:09:36.621 Verification LBA range: start 0x20 length 0x20 00:09:36.621 Malloc2p3 : 5.32 146.02 9.13 0.00 0.00 209570.70 3635.69 295598.57 00:09:36.621 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:09:36.621 Verification LBA range: start 0x0 length 0x20 00:09:36.621 Malloc2p4 : 5.32 141.43 8.84 0.00 0.00 216538.75 3698.10 295598.57 00:09:36.621 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:09:36.621 Verification LBA range: start 0x20 length 0x20 00:09:36.621 Malloc2p4 : 5.32 146.01 9.13 0.00 0.00 209145.25 3073.95 289606.70 00:09:36.621 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:09:36.621 Verification LBA range: start 0x0 length 0x20 00:09:36.621 Malloc2p5 : 5.32 141.43 8.84 0.00 0.00 216088.93 3276.80 287609.42 00:09:36.621 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:09:36.621 Verification LBA range: start 0x20 length 0x20 00:09:36.621 Malloc2p5 : 5.35 149.10 9.32 0.00 0.00 205126.15 2761.87 283614.84 00:09:36.621 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:09:36.621 Verification LBA range: start 0x0 length 0x20 00:09:36.621 Malloc2p6 : 5.32 141.42 8.84 0.00 0.00 215650.05 3042.74 281617.55 00:09:36.621 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:09:36.621 Verification LBA range: start 0x20 length 0x20 00:09:36.621 Malloc2p6 : 5.35 149.09 9.32 0.00 0.00 204683.09 2980.33 275625.69 00:09:36.621 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:09:36.621 Verification LBA range: start 0x0 length 0x20 00:09:36.621 Malloc2p7 : 5.32 141.41 8.84 0.00 0.00 215217.45 3386.03 273628.40 00:09:36.621 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:09:36.621 Verification LBA range: start 0x20 length 0x20 00:09:36.621 Malloc2p7 : 5.35 149.08 9.32 0.00 0.00 204257.94 4088.20 265639.25 00:09:36.621 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:36.621 Verification LBA range: start 0x0 length 0x100 00:09:36.621 TestPT : 5.37 227.93 14.25 0.00 0.00 530928.75 27462.70 1078535.31 00:09:36.621 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:36.621 Verification LBA range: start 0x100 length 0x100 00:09:36.621 TestPT : 5.39 238.83 14.93 0.00 0.00 505673.29 22719.15 1038589.56 00:09:36.621 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:36.621 Verification LBA range: start 0x0 length 0x200 00:09:36.621 raid0 : 5.42 237.82 14.86 0.00 0.00 505179.58 20971.52 1062557.01 00:09:36.621 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:36.621 Verification LBA range: start 0x200 length 0x200 00:09:36.622 raid0 : 5.44 243.01 15.19 0.00 0.00 492253.13 22094.99 1030600.41 00:09:36.622 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:36.622 Verification LBA range: start 0x0 length 0x200 00:09:36.622 concat0 : 5.43 243.76 15.23 0.00 0.00 488774.96 18724.57 1062557.01 00:09:36.622 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:36.622 Verification LBA range: start 0x200 length 0x200 00:09:36.622 concat0 : 
5.44 256.06 16.00 0.00 0.00 464830.27 18599.74 1030600.41 00:09:36.622 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:36.622 Verification LBA range: start 0x0 length 0x100 00:09:36.622 raid1 : 5.42 263.98 16.50 0.00 0.00 449495.82 10985.08 1062557.01 00:09:36.622 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:36.622 Verification LBA range: start 0x100 length 0x100 00:09:36.622 raid1 : 5.44 283.42 17.71 0.00 0.00 418138.96 9611.95 1030600.41 00:09:36.622 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:09:36.622 Verification LBA range: start 0x0 length 0x4e 00:09:36.622 AIO0 : 5.43 269.45 16.84 0.00 0.00 266711.63 651.46 603180.86 00:09:36.622 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:09:36.622 Verification LBA range: start 0x4e length 0x4e 00:09:36.622 AIO0 : 5.45 271.21 16.95 0.00 0.00 264151.10 446.66 583207.98 00:09:36.622 =================================================================================================================== 00:09:36.622 Total : 7805.75 487.86 0.00 0.00 300867.22 446.66 1078535.31 00:09:36.622 00:09:36.622 real 0m6.537s 00:09:36.622 user 0m12.051s 00:09:36.622 sys 0m0.388s 00:09:36.622 07:55:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:36.622 ************************************ 00:09:36.622 END TEST bdev_verify_big_io 00:09:36.622 ************************************ 00:09:36.622 07:55:41 -- common/autotest_common.sh@10 -- # set +x 00:09:36.622 07:55:41 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:36.622 07:55:41 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:09:36.622 07:55:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:36.622 07:55:41 -- common/autotest_common.sh@10 -- # set +x 00:09:36.622 ************************************ 00:09:36.622 START TEST bdev_write_zeroes 00:09:36.622 ************************************ 00:09:36.622 07:55:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:36.622 [2024-07-13 07:55:41.683802] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:09:36.622 [2024-07-13 07:55:41.683981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56488 ] 00:09:36.622 [2024-07-13 07:55:41.814426] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.622 [2024-07-13 07:55:41.858952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.622 [2024-07-13 07:55:41.994666] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:36.622 [2024-07-13 07:55:41.994734] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:36.622 [2024-07-13 07:55:42.002630] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:36.622 [2024-07-13 07:55:42.002687] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:36.622 [2024-07-13 07:55:42.010669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:36.622 [2024-07-13 07:55:42.010716] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:09:36.622 [2024-07-13 07:55:42.010754] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:09:36.622 [2024-07-13 07:55:42.082201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:36.622 [2024-07-13 07:55:42.082286] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:36.622 [2024-07-13 07:55:42.082332] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002a980 00:09:36.622 [2024-07-13 07:55:42.082357] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:36.622 [2024-07-13 07:55:42.084227] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:36.622 [2024-07-13 07:55:42.084270] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:09:36.622 Running I/O for 1 seconds... 
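This write_zeroes pass consumes the same bdev.json as the earlier stages, and the two json_config tests that follow it probe the required shape of such a file: the parser rejects a config not enclosed in {} and a "subsystems" field that is not an array. For orientation, a minimal hedged example of a valid shape (the bdev_malloc_create method name is the standard RPC, assumed here rather than taken from this log; the /tmp path is illustrative):

# minimal valid config shape: a top-level object with a "subsystems" array
cat > /tmp/minimal_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 65536, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF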
00:09:37.555 00:09:37.555 Latency(us) 00:09:37.555 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:37.555 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:37.555 Malloc0 : 1.01 17309.62 67.62 0.00 0.00 7392.28 217.48 14168.26 00:09:37.555 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:37.555 Malloc1p0 : 1.01 17303.08 67.59 0.00 0.00 7388.70 310.13 13668.94 00:09:37.555 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:37.555 Malloc1p1 : 1.01 17298.75 67.57 0.00 0.00 7384.59 306.22 13356.86 00:09:37.555 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:37.555 Malloc2p0 : 1.01 17294.89 67.56 0.00 0.00 7381.34 292.57 13107.20 00:09:37.555 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:37.555 Malloc2p1 : 1.01 17290.94 67.54 0.00 0.00 7378.92 296.47 12795.12 00:09:37.555 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:37.555 Malloc2p2 : 1.01 17287.36 67.53 0.00 0.00 7375.14 282.82 12545.46 00:09:37.555 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:37.555 Malloc2p3 : 1.01 17283.64 67.51 0.00 0.00 7371.31 298.42 12295.80 00:09:37.555 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:37.555 Malloc2p4 : 1.01 17280.12 67.50 0.00 0.00 7367.46 290.62 11983.73 00:09:37.555 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:37.555 Malloc2p5 : 1.02 17276.56 67.49 0.00 0.00 7363.77 282.82 11734.06 00:09:37.555 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:37.555 Malloc2p6 : 1.02 17272.85 67.47 0.00 0.00 7361.50 286.72 11421.99 00:09:37.555 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:37.555 Malloc2p7 : 1.02 17269.21 67.46 0.00 0.00 7357.57 298.42 11109.91 00:09:37.555 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:37.555 TestPT : 1.02 17265.57 67.44 0.00 0.00 7353.88 296.47 10860.25 00:09:37.555 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:37.555 raid0 : 1.02 17260.91 67.43 0.00 0.00 7350.07 507.12 10360.93 00:09:37.555 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:37.555 concat0 : 1.02 17256.36 67.41 0.00 0.00 7343.03 507.12 9861.61 00:09:37.555 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:37.555 raid1 : 1.02 17249.62 67.38 0.00 0.00 7336.27 858.21 8987.79 00:09:37.555 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:37.555 AIO0 : 1.02 17326.28 67.68 0.00 0.00 7288.86 395.95 8426.06 00:09:37.555 =================================================================================================================== 00:09:37.555 Total : 276525.75 1080.18 0.00 0.00 7362.14 217.48 14168.26 00:09:37.813 ************************************ 00:09:37.813 END TEST bdev_write_zeroes 00:09:37.813 ************************************ 00:09:37.813 00:09:37.813 real 0m2.020s 00:09:37.813 user 0m1.453s 00:09:37.813 sys 0m0.353s 00:09:37.813 07:55:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:37.813 07:55:43 -- common/autotest_common.sh@10 -- # set +x 00:09:37.813 07:55:43 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:37.813 07:55:43 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:09:37.813 07:55:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:37.813 07:55:43 -- common/autotest_common.sh@10 -- # set +x 00:09:38.070 ************************************ 00:09:38.070 START TEST bdev_json_nonenclosed 00:09:38.070 ************************************ 00:09:38.070 07:55:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:38.070 [2024-07-13 07:55:43.756042] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:09:38.070 [2024-07-13 07:55:43.756206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56538 ] 00:09:38.327 [2024-07-13 07:55:43.885685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.327 [2024-07-13 07:55:43.928815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.327 [2024-07-13 07:55:43.929010] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:38.327 [2024-07-13 07:55:43.929047] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:38.327 ************************************ 00:09:38.327 END TEST bdev_json_nonenclosed 00:09:38.327 ************************************ 00:09:38.327 00:09:38.327 real 0m0.387s 00:09:38.327 user 0m0.111s 00:09:38.327 sys 0m0.079s 00:09:38.327 07:55:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:38.327 07:55:44 -- common/autotest_common.sh@10 -- # set +x 00:09:38.327 07:55:44 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:38.327 07:55:44 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:09:38.327 07:55:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:38.327 07:55:44 -- common/autotest_common.sh@10 -- # set +x 00:09:38.327 ************************************ 00:09:38.327 START TEST bdev_json_nonarray 00:09:38.327 ************************************ 00:09:38.327 07:55:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:38.586 [2024-07-13 07:55:44.204822] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:09:38.586 [2024-07-13 07:55:44.205061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56560 ] 00:09:38.586 [2024-07-13 07:55:44.362214] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.845 [2024-07-13 07:55:44.412486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.845 [2024-07-13 07:55:44.412739] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:09:38.845 [2024-07-13 07:55:44.412789] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:38.845 ************************************ 00:09:38.845 END TEST bdev_json_nonarray 00:09:38.845 ************************************ 00:09:38.845 00:09:38.845 real 0m0.440s 00:09:38.845 user 0m0.134s 00:09:38.845 sys 0m0.109s 00:09:38.845 07:55:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:38.845 07:55:44 -- common/autotest_common.sh@10 -- # set +x 00:09:38.845 07:55:44 -- bdev/blockdev.sh@785 -- # [[ bdev == bdev ]] 00:09:38.845 07:55:44 -- bdev/blockdev.sh@786 -- # run_test bdev_qos qos_test_suite '' 00:09:38.845 07:55:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:38.845 07:55:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:38.845 07:55:44 -- common/autotest_common.sh@10 -- # set +x 00:09:38.845 ************************************ 00:09:38.845 START TEST bdev_qos 00:09:38.845 ************************************ 00:09:38.845 07:55:44 -- common/autotest_common.sh@1104 -- # qos_test_suite '' 00:09:38.845 Process qos testing pid: 56589 00:09:38.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.845 07:55:44 -- bdev/blockdev.sh@444 -- # QOS_PID=56589 00:09:38.845 07:55:44 -- bdev/blockdev.sh@445 -- # echo 'Process qos testing pid: 56589' 00:09:38.845 07:55:44 -- bdev/blockdev.sh@446 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:09:38.845 07:55:44 -- bdev/blockdev.sh@443 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:09:38.845 07:55:44 -- bdev/blockdev.sh@447 -- # waitforlisten 56589 00:09:38.845 07:55:44 -- common/autotest_common.sh@819 -- # '[' -z 56589 ']' 00:09:38.845 07:55:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.845 07:55:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:38.845 07:55:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.845 07:55:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:38.845 07:55:44 -- common/autotest_common.sh@10 -- # set +x 00:09:39.104 [2024-07-13 07:55:44.699254] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:09:39.104 [2024-07-13 07:55:44.699708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56589 ] 00:09:39.104 [2024-07-13 07:55:44.852947] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.104 [2024-07-13 07:55:44.902423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.039 07:55:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:40.039 07:55:45 -- common/autotest_common.sh@852 -- # return 0 00:09:40.039 07:55:45 -- bdev/blockdev.sh@449 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:09:40.039 07:55:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:40.039 07:55:45 -- common/autotest_common.sh@10 -- # set +x 00:09:40.039 Malloc_0 00:09:40.039 07:55:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:40.039 07:55:45 -- bdev/blockdev.sh@450 -- # waitforbdev Malloc_0 00:09:40.039 07:55:45 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_0 00:09:40.039 07:55:45 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:40.039 07:55:45 -- common/autotest_common.sh@889 -- # local i 00:09:40.039 07:55:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:40.039 07:55:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:40.039 07:55:45 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:09:40.039 07:55:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:40.039 07:55:45 -- common/autotest_common.sh@10 -- # set +x 00:09:40.039 07:55:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:40.039 07:55:45 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:09:40.039 07:55:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:40.039 07:55:45 -- common/autotest_common.sh@10 -- # set +x 00:09:40.039 [ 00:09:40.039 { 00:09:40.039 "name": "Malloc_0", 00:09:40.039 "aliases": [ 00:09:40.039 "b650207d-1519-43d8-9c49-9d926eae4dc3" 00:09:40.039 ], 00:09:40.039 "product_name": "Malloc disk", 00:09:40.039 "block_size": 512, 00:09:40.039 "num_blocks": 262144, 00:09:40.039 "uuid": "b650207d-1519-43d8-9c49-9d926eae4dc3", 00:09:40.039 "assigned_rate_limits": { 00:09:40.039 "rw_ios_per_sec": 0, 00:09:40.039 "rw_mbytes_per_sec": 0, 00:09:40.039 "r_mbytes_per_sec": 0, 00:09:40.039 "w_mbytes_per_sec": 0 00:09:40.039 }, 00:09:40.039 "claimed": false, 00:09:40.039 "zoned": false, 00:09:40.039 "supported_io_types": { 00:09:40.039 "read": true, 00:09:40.039 "write": true, 00:09:40.039 "unmap": true, 00:09:40.039 "write_zeroes": true, 00:09:40.039 "flush": true, 00:09:40.039 "reset": true, 00:09:40.039 "compare": false, 00:09:40.039 "compare_and_write": false, 00:09:40.039 "abort": true, 00:09:40.039 "nvme_admin": false, 00:09:40.039 "nvme_io": false 00:09:40.039 }, 00:09:40.039 "memory_domains": [ 00:09:40.039 { 00:09:40.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.039 "dma_device_type": 2 00:09:40.039 } 00:09:40.039 ], 00:09:40.039 "driver_specific": {} 00:09:40.039 } 00:09:40.039 ] 00:09:40.039 07:55:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:40.039 07:55:45 -- common/autotest_common.sh@895 -- # return 0 00:09:40.039 07:55:45 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_null_create Null_1 128 512 00:09:40.039 07:55:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:40.039 07:55:45 -- common/autotest_common.sh@10 -- # 
set +x 00:09:40.039 Null_1 00:09:40.039 07:55:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:40.039 07:55:45 -- bdev/blockdev.sh@452 -- # waitforbdev Null_1 00:09:40.039 07:55:45 -- common/autotest_common.sh@887 -- # local bdev_name=Null_1 00:09:40.039 07:55:45 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:40.039 07:55:45 -- common/autotest_common.sh@889 -- # local i 00:09:40.039 07:55:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:40.039 07:55:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:40.039 07:55:45 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:09:40.039 07:55:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:40.039 07:55:45 -- common/autotest_common.sh@10 -- # set +x 00:09:40.039 07:55:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:40.039 07:55:45 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:09:40.039 07:55:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:40.039 07:55:45 -- common/autotest_common.sh@10 -- # set +x 00:09:40.039 [ 00:09:40.039 { 00:09:40.039 "name": "Null_1", 00:09:40.039 "aliases": [ 00:09:40.039 "d6b42783-d260-41fa-80bd-294b12ef147b" 00:09:40.039 ], 00:09:40.039 "product_name": "Null disk", 00:09:40.039 "block_size": 512, 00:09:40.039 "num_blocks": 262144, 00:09:40.039 "uuid": "d6b42783-d260-41fa-80bd-294b12ef147b", 00:09:40.039 "assigned_rate_limits": { 00:09:40.039 "rw_ios_per_sec": 0, 00:09:40.039 "rw_mbytes_per_sec": 0, 00:09:40.039 "r_mbytes_per_sec": 0, 00:09:40.039 "w_mbytes_per_sec": 0 00:09:40.039 }, 00:09:40.039 "claimed": false, 00:09:40.039 "zoned": false, 00:09:40.039 "supported_io_types": { 00:09:40.039 "read": true, 00:09:40.039 "write": true, 00:09:40.039 "unmap": false, 00:09:40.039 "write_zeroes": true, 00:09:40.039 "flush": false, 00:09:40.039 "reset": true, 00:09:40.039 "compare": false, 00:09:40.039 "compare_and_write": false, 00:09:40.039 "abort": true, 00:09:40.039 "nvme_admin": false, 00:09:40.039 "nvme_io": false 00:09:40.039 }, 00:09:40.039 "driver_specific": {} 00:09:40.039 } 00:09:40.039 ] 00:09:40.039 07:55:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:40.039 07:55:45 -- common/autotest_common.sh@895 -- # return 0 00:09:40.039 07:55:45 -- bdev/blockdev.sh@455 -- # qos_function_test 00:09:40.039 07:55:45 -- bdev/blockdev.sh@408 -- # local qos_lower_iops_limit=1000 00:09:40.039 07:55:45 -- bdev/blockdev.sh@409 -- # local qos_lower_bw_limit=2 00:09:40.039 07:55:45 -- bdev/blockdev.sh@410 -- # local io_result=0 00:09:40.039 07:55:45 -- bdev/blockdev.sh@411 -- # local iops_limit=0 00:09:40.039 07:55:45 -- bdev/blockdev.sh@412 -- # local bw_limit=0 00:09:40.039 07:55:45 -- bdev/blockdev.sh@454 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:40.039 07:55:45 -- bdev/blockdev.sh@414 -- # get_io_result IOPS Malloc_0 00:09:40.039 07:55:45 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:09:40.039 07:55:45 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:09:40.039 07:55:45 -- bdev/blockdev.sh@375 -- # local iostat_result 00:09:40.039 07:55:45 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:09:40.039 07:55:45 -- bdev/blockdev.sh@376 -- # tail -1 00:09:40.039 07:55:45 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:09:40.039 Running I/O for 60 seconds... 
00:09:45.307 07:55:50 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 215530.87 862123.50 0.00 0.00 872448.00 0.00 0.00 ' 00:09:45.307 07:55:50 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:09:45.307 07:55:50 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:09:45.307 07:55:50 -- bdev/blockdev.sh@378 -- # iostat_result=215530.87 00:09:45.307 07:55:50 -- bdev/blockdev.sh@383 -- # echo 215530 00:09:45.307 07:55:50 -- bdev/blockdev.sh@414 -- # io_result=215530 00:09:45.307 07:55:50 -- bdev/blockdev.sh@416 -- # iops_limit=53000 00:09:45.307 07:55:50 -- bdev/blockdev.sh@417 -- # '[' 53000 -gt 1000 ']' 00:09:45.307 07:55:50 -- bdev/blockdev.sh@420 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 53000 Malloc_0 00:09:45.307 07:55:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:45.307 07:55:50 -- common/autotest_common.sh@10 -- # set +x 00:09:45.307 07:55:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:45.307 07:55:50 -- bdev/blockdev.sh@421 -- # run_test bdev_qos_iops run_qos_test 53000 IOPS Malloc_0 00:09:45.307 07:55:50 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:09:45.307 07:55:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:45.307 07:55:50 -- common/autotest_common.sh@10 -- # set +x 00:09:45.307 ************************************ 00:09:45.307 START TEST bdev_qos_iops 00:09:45.307 ************************************ 00:09:45.307 07:55:50 -- common/autotest_common.sh@1104 -- # run_qos_test 53000 IOPS Malloc_0 00:09:45.307 07:55:50 -- bdev/blockdev.sh@387 -- # local qos_limit=53000 00:09:45.307 07:55:50 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:09:45.307 07:55:50 -- bdev/blockdev.sh@390 -- # get_io_result IOPS Malloc_0 00:09:45.307 07:55:50 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:09:45.307 07:55:50 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:09:45.307 07:55:50 -- bdev/blockdev.sh@375 -- # local iostat_result 00:09:45.307 07:55:50 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:09:45.307 07:55:50 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:09:45.307 07:55:50 -- bdev/blockdev.sh@376 -- # tail -1 00:09:50.635 07:55:56 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 52965.03 211860.11 0.00 0.00 214120.00 0.00 0.00 ' 00:09:50.635 07:55:56 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:09:50.635 07:55:56 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:09:50.635 07:55:56 -- bdev/blockdev.sh@378 -- # iostat_result=52965.03 00:09:50.635 07:55:56 -- bdev/blockdev.sh@383 -- # echo 52965 00:09:50.636 07:55:56 -- bdev/blockdev.sh@390 -- # qos_result=52965 00:09:50.636 07:55:56 -- bdev/blockdev.sh@391 -- # '[' IOPS = BANDWIDTH ']' 00:09:50.636 07:55:56 -- bdev/blockdev.sh@394 -- # lower_limit=47700 00:09:50.636 07:55:56 -- bdev/blockdev.sh@395 -- # upper_limit=58300 00:09:50.636 07:55:56 -- bdev/blockdev.sh@398 -- # '[' 52965 -lt 47700 ']' 00:09:50.636 07:55:56 -- bdev/blockdev.sh@398 -- # '[' 52965 -gt 58300 ']' 00:09:50.636 00:09:50.636 real 0m5.184s 00:09:50.636 user 0m0.119s 00:09:50.636 sys 0m0.032s 00:09:50.636 07:55:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:50.636 07:55:56 -- common/autotest_common.sh@10 -- # set +x 00:09:50.636 ************************************ 00:09:50.636 END TEST bdev_qos_iops 00:09:50.636 ************************************ 00:09:50.636 07:55:56 -- bdev/blockdev.sh@425 -- # get_io_result BANDWIDTH Null_1 00:09:50.636 07:55:56 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:09:50.636 
07:55:56 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:09:50.636 07:55:56 -- bdev/blockdev.sh@375 -- # local iostat_result 00:09:50.636 07:55:56 -- bdev/blockdev.sh@376 -- # grep Null_1 00:09:50.636 07:55:56 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:09:50.636 07:55:56 -- bdev/blockdev.sh@376 -- # tail -1 00:09:55.899 07:56:01 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 58394.82 233579.30 0.00 0.00 235520.00 0.00 0.00 ' 00:09:55.900 07:56:01 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:09:55.900 07:56:01 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:09:55.900 07:56:01 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:09:55.900 07:56:01 -- bdev/blockdev.sh@380 -- # iostat_result=235520.00 00:09:55.900 07:56:01 -- bdev/blockdev.sh@383 -- # echo 235520 00:09:55.900 07:56:01 -- bdev/blockdev.sh@425 -- # bw_limit=235520 00:09:55.900 07:56:01 -- bdev/blockdev.sh@426 -- # bw_limit=23 00:09:55.900 07:56:01 -- bdev/blockdev.sh@427 -- # '[' 23 -lt 2 ']' 00:09:55.900 07:56:01 -- bdev/blockdev.sh@430 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 23 Null_1 00:09:55.900 07:56:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:55.900 07:56:01 -- common/autotest_common.sh@10 -- # set +x 00:09:55.900 07:56:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:55.900 07:56:01 -- bdev/blockdev.sh@431 -- # run_test bdev_qos_bw run_qos_test 23 BANDWIDTH Null_1 00:09:55.900 07:56:01 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:09:55.900 07:56:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:55.900 07:56:01 -- common/autotest_common.sh@10 -- # set +x 00:09:55.900 ************************************ 00:09:55.900 START TEST bdev_qos_bw 00:09:55.900 ************************************ 00:09:55.900 07:56:01 -- common/autotest_common.sh@1104 -- # run_qos_test 23 BANDWIDTH Null_1 00:09:55.900 07:56:01 -- bdev/blockdev.sh@387 -- # local qos_limit=23 00:09:55.900 07:56:01 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:09:55.900 07:56:01 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Null_1 00:09:55.900 07:56:01 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:09:55.900 07:56:01 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:09:55.900 07:56:01 -- bdev/blockdev.sh@375 -- # local iostat_result 00:09:55.900 07:56:01 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:09:55.900 07:56:01 -- bdev/blockdev.sh@376 -- # grep Null_1 00:09:55.900 07:56:01 -- bdev/blockdev.sh@376 -- # tail -1 00:10:01.164 07:56:06 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 5888.55 23554.21 0.00 0.00 23788.00 0.00 0.00 ' 00:10:01.164 07:56:06 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:10:01.164 07:56:06 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:10:01.164 07:56:06 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:10:01.164 07:56:06 -- bdev/blockdev.sh@380 -- # iostat_result=23788.00 00:10:01.164 07:56:06 -- bdev/blockdev.sh@383 -- # echo 23788 00:10:01.164 ************************************ 00:10:01.164 END TEST bdev_qos_bw 00:10:01.164 ************************************ 00:10:01.164 07:56:06 -- bdev/blockdev.sh@390 -- # qos_result=23788 00:10:01.164 07:56:06 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:10:01.164 07:56:06 -- bdev/blockdev.sh@392 -- # qos_limit=23552 00:10:01.164 07:56:06 -- bdev/blockdev.sh@394 -- # lower_limit=21196 00:10:01.164 07:56:06 -- bdev/blockdev.sh@395 -- # 
upper_limit=25907 00:10:01.164 07:56:06 -- bdev/blockdev.sh@398 -- # '[' 23788 -lt 21196 ']' 00:10:01.164 07:56:06 -- bdev/blockdev.sh@398 -- # '[' 23788 -gt 25907 ']' 00:10:01.164 00:10:01.164 real 0m5.176s 00:10:01.164 user 0m0.105s 00:10:01.164 sys 0m0.028s 00:10:01.164 07:56:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:01.164 07:56:06 -- common/autotest_common.sh@10 -- # set +x 00:10:01.164 07:56:06 -- bdev/blockdev.sh@434 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:10:01.164 07:56:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:01.164 07:56:06 -- common/autotest_common.sh@10 -- # set +x 00:10:01.164 07:56:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:01.164 07:56:06 -- bdev/blockdev.sh@435 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:10:01.164 07:56:06 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:10:01.164 07:56:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:01.164 07:56:06 -- common/autotest_common.sh@10 -- # set +x 00:10:01.164 ************************************ 00:10:01.164 START TEST bdev_qos_ro_bw 00:10:01.164 ************************************ 00:10:01.164 07:56:06 -- common/autotest_common.sh@1104 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:10:01.164 07:56:06 -- bdev/blockdev.sh@387 -- # local qos_limit=2 00:10:01.164 07:56:06 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:10:01.164 07:56:06 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Malloc_0 00:10:01.164 07:56:06 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:10:01.164 07:56:06 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:10:01.164 07:56:06 -- bdev/blockdev.sh@375 -- # local iostat_result 00:10:01.164 07:56:06 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:10:01.164 07:56:06 -- bdev/blockdev.sh@376 -- # tail -1 00:10:01.164 07:56:06 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:10:06.493 07:56:11 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 511.45 2045.81 0.00 0.00 2064.00 0.00 0.00 ' 00:10:06.493 07:56:11 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:10:06.493 07:56:11 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:10:06.493 07:56:11 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:10:06.493 07:56:11 -- bdev/blockdev.sh@380 -- # iostat_result=2064.00 00:10:06.493 07:56:11 -- bdev/blockdev.sh@383 -- # echo 2064 00:10:06.493 07:56:11 -- bdev/blockdev.sh@390 -- # qos_result=2064 00:10:06.493 ************************************ 00:10:06.493 END TEST bdev_qos_ro_bw 00:10:06.493 ************************************ 00:10:06.493 07:56:11 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:10:06.493 07:56:11 -- bdev/blockdev.sh@392 -- # qos_limit=2048 00:10:06.493 07:56:11 -- bdev/blockdev.sh@394 -- # lower_limit=1843 00:10:06.493 07:56:11 -- bdev/blockdev.sh@395 -- # upper_limit=2252 00:10:06.493 07:56:11 -- bdev/blockdev.sh@398 -- # '[' 2064 -lt 1843 ']' 00:10:06.493 07:56:11 -- bdev/blockdev.sh@398 -- # '[' 2064 -gt 2252 ']' 00:10:06.493 00:10:06.493 real 0m5.179s 00:10:06.493 user 0m0.120s 00:10:06.493 sys 0m0.028s 00:10:06.493 07:56:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:06.493 07:56:11 -- common/autotest_common.sh@10 -- # set +x 00:10:06.493 07:56:11 -- bdev/blockdev.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:10:06.493 07:56:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:06.493 07:56:11 -- common/autotest_common.sh@10 -- # set +x 
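All three QoS sub-tests above follow the same recipe: apply a cap with bdev_set_qos_limit, sample the device with scripts/iostat.py, and assert that the measured rate lands within 10% of the cap. A minimal sketch of reproducing the read-bandwidth check by hand, assuming the default RPC socket /var/tmp/spdk.sock and that rpc_cmd in this suite wraps scripts/rpc.py (both assumptions, not shown verbatim in this log):

  # cap reads on Malloc_0 at 2 MiB/s, as the bdev_qos_ro_bw test does above
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0
  # sample per-bdev counters once per second for 5 seconds, keep the last Malloc_0 row
  /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 | grep Malloc_0 | tail -1
  # field 6 of that row is read throughput in KiB/s; the suite accepts
  # qos_limit*90/100 .. qos_limit*110/100, i.e. 1843..2252 for the 2048 KiB/s cap,
  # so the 2064.00 measured above passes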
00:10:06.752 07:56:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:06.752 07:56:12 -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_null_delete Null_1 00:10:06.752 07:56:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:06.752 07:56:12 -- common/autotest_common.sh@10 -- # set +x 00:10:06.752 00:10:06.752 Latency(us) 00:10:06.752 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:06.752 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:10:06.752 Malloc_0 : 26.54 72551.36 283.40 0.00 0.00 3495.08 1045.46 503316.48 00:10:06.752 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:10:06.752 Null_1 : 26.60 66907.75 261.36 0.00 0.00 3820.71 214.55 59668.97 00:10:06.752 =================================================================================================================== 00:10:06.752 Total : 139459.11 544.76 0.00 0.00 3651.50 214.55 503316.48 00:10:06.752 0 00:10:06.752 07:56:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:06.752 07:56:12 -- bdev/blockdev.sh@459 -- # killprocess 56589 00:10:06.752 07:56:12 -- common/autotest_common.sh@926 -- # '[' -z 56589 ']' 00:10:06.752 07:56:12 -- common/autotest_common.sh@930 -- # kill -0 56589 00:10:06.752 07:56:12 -- common/autotest_common.sh@931 -- # uname 00:10:06.752 07:56:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:06.752 07:56:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56589 00:10:06.752 killing process with pid 56589 00:10:06.752 Received shutdown signal, test time was about 26.643554 seconds 00:10:06.752 00:10:06.753 Latency(us) 00:10:06.753 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:06.753 =================================================================================================================== 00:10:06.753 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:06.753 07:56:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:10:06.753 07:56:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:10:06.753 07:56:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56589' 00:10:06.753 07:56:12 -- common/autotest_common.sh@945 -- # kill 56589 00:10:06.753 07:56:12 -- common/autotest_common.sh@950 -- # wait 56589 00:10:07.012 07:56:12 -- bdev/blockdev.sh@460 -- # trap - SIGINT SIGTERM EXIT 00:10:07.012 00:10:07.012 real 0m28.057s 00:10:07.012 user 0m28.834s 00:10:07.012 sys 0m0.584s 00:10:07.012 07:56:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:07.012 ************************************ 00:10:07.012 END TEST bdev_qos 00:10:07.012 ************************************ 00:10:07.012 07:56:12 -- common/autotest_common.sh@10 -- # set +x 00:10:07.012 07:56:12 -- bdev/blockdev.sh@787 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:10:07.012 07:56:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:07.012 07:56:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:07.012 07:56:12 -- common/autotest_common.sh@10 -- # set +x 00:10:07.012 ************************************ 00:10:07.012 START TEST bdev_qd_sampling 00:10:07.012 ************************************ 00:10:07.012 07:56:12 -- common/autotest_common.sh@1104 -- # qd_sampling_test_suite '' 00:10:07.012 07:56:12 -- bdev/blockdev.sh@536 -- # QD_DEV=Malloc_QD 00:10:07.012 Process bdev QD sampling period testing pid: 57065 00:10:07.012 07:56:12 -- bdev/blockdev.sh@539 -- # QD_PID=57065 00:10:07.012 
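Every suite in this file drives bdevperf the same way: launch it with -z so it initializes and then idles, configure bdevs over RPC, and trigger the measured run with bdevperf.py perform_tests. A minimal sketch of that harness pattern, with flags copied from the qd-sampling invocation below and a sleep standing in for the suite's waitforlisten helper (a simplification, not the suite's exact code):

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  "$BDEVPERF" -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' &   # -z: idle until perform_tests arrives
  BDEVPERF_PID=$!
  sleep 1                                    # crude stand-in for waitforlisten $BDEVPERF_PID
  # ...create Malloc_QD etc. with scripts/rpc.py here...
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
  kill "$BDEVPERF_PID"; wait "$BDEVPERF_PID" 2>/dev/null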
07:56:12 -- bdev/blockdev.sh@540 -- # echo 'Process bdev QD sampling period testing pid: 57065' 00:10:07.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.012 07:56:12 -- bdev/blockdev.sh@541 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:10:07.012 07:56:12 -- bdev/blockdev.sh@542 -- # waitforlisten 57065 00:10:07.012 07:56:12 -- bdev/blockdev.sh@538 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:10:07.012 07:56:12 -- common/autotest_common.sh@819 -- # '[' -z 57065 ']' 00:10:07.012 07:56:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.012 07:56:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:07.012 07:56:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.012 07:56:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:07.012 07:56:12 -- common/autotest_common.sh@10 -- # set +x 00:10:07.012 [2024-07-13 07:56:12.803656] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:07.012 [2024-07-13 07:56:12.803906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57065 ] 00:10:07.277 [2024-07-13 07:56:12.948876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:07.277 [2024-07-13 07:56:13.001862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.277 [2024-07-13 07:56:13.001868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.843 07:56:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:07.843 07:56:13 -- common/autotest_common.sh@852 -- # return 0 00:10:07.843 07:56:13 -- bdev/blockdev.sh@544 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:10:07.843 07:56:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:07.843 07:56:13 -- common/autotest_common.sh@10 -- # set +x 00:10:07.843 Malloc_QD 00:10:07.843 07:56:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:07.843 07:56:13 -- bdev/blockdev.sh@545 -- # waitforbdev Malloc_QD 00:10:07.843 07:56:13 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_QD 00:10:07.843 07:56:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:07.843 07:56:13 -- common/autotest_common.sh@889 -- # local i 00:10:07.843 07:56:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:07.843 07:56:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:07.843 07:56:13 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:10:07.843 07:56:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:07.843 07:56:13 -- common/autotest_common.sh@10 -- # set +x 00:10:07.843 07:56:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:07.843 07:56:13 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:10:07.843 07:56:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:07.843 07:56:13 -- common/autotest_common.sh@10 -- # set +x 00:10:07.843 [ 00:10:07.843 { 00:10:07.843 "name": "Malloc_QD", 00:10:07.843 "aliases": [ 00:10:07.843 "e991fe75-b8e5-495f-94ed-d8906631b19e" 00:10:07.843 ], 00:10:07.843 "product_name": "Malloc disk", 00:10:07.843 "block_size": 512, 00:10:07.843 "num_blocks": 
262144, 00:10:07.843 "uuid": "e991fe75-b8e5-495f-94ed-d8906631b19e", 00:10:07.843 "assigned_rate_limits": { 00:10:07.843 "rw_ios_per_sec": 0, 00:10:07.843 "rw_mbytes_per_sec": 0, 00:10:07.843 "r_mbytes_per_sec": 0, 00:10:07.843 "w_mbytes_per_sec": 0 00:10:07.843 }, 00:10:07.843 "claimed": false, 00:10:07.843 "zoned": false, 00:10:07.843 "supported_io_types": { 00:10:07.843 "read": true, 00:10:07.843 "write": true, 00:10:07.843 "unmap": true, 00:10:07.843 "write_zeroes": true, 00:10:07.843 "flush": true, 00:10:07.843 "reset": true, 00:10:07.843 "compare": false, 00:10:07.843 "compare_and_write": false, 00:10:07.843 "abort": true, 00:10:07.843 "nvme_admin": false, 00:10:07.843 "nvme_io": false 00:10:07.843 }, 00:10:07.843 "memory_domains": [ 00:10:07.843 { 00:10:07.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.844 "dma_device_type": 2 00:10:07.844 } 00:10:07.844 ], 00:10:07.844 "driver_specific": {} 00:10:07.844 } 00:10:07.844 ] 00:10:07.844 07:56:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:07.844 07:56:13 -- common/autotest_common.sh@895 -- # return 0 00:10:07.844 07:56:13 -- bdev/blockdev.sh@548 -- # sleep 2 00:10:07.844 07:56:13 -- bdev/blockdev.sh@547 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:08.101 Running I/O for 5 seconds... 00:10:10.003 07:56:15 -- bdev/blockdev.sh@549 -- # qd_sampling_function_test Malloc_QD 00:10:10.003 07:56:15 -- bdev/blockdev.sh@517 -- # local bdev_name=Malloc_QD 00:10:10.003 07:56:15 -- bdev/blockdev.sh@518 -- # local sampling_period=10 00:10:10.003 07:56:15 -- bdev/blockdev.sh@519 -- # local iostats 00:10:10.003 07:56:15 -- bdev/blockdev.sh@521 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:10:10.003 07:56:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:10.003 07:56:15 -- common/autotest_common.sh@10 -- # set +x 00:10:10.003 07:56:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:10.003 07:56:15 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:10:10.003 07:56:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:10.003 07:56:15 -- common/autotest_common.sh@10 -- # set +x 00:10:10.003 07:56:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:10.003 07:56:15 -- bdev/blockdev.sh@523 -- # iostats='{ 00:10:10.003 "tick_rate": 2100000000, 00:10:10.003 "ticks": 1085770023062, 00:10:10.003 "bdevs": [ 00:10:10.003 { 00:10:10.003 "name": "Malloc_QD", 00:10:10.003 "bytes_read": 2269155840, 00:10:10.003 "num_read_ops": 553987, 00:10:10.003 "bytes_written": 0, 00:10:10.003 "num_write_ops": 0, 00:10:10.003 "bytes_unmapped": 0, 00:10:10.003 "num_unmap_ops": 0, 00:10:10.003 "bytes_copied": 0, 00:10:10.003 "num_copy_ops": 0, 00:10:10.003 "read_latency_ticks": 2038437096438, 00:10:10.003 "max_read_latency_ticks": 6647562, 00:10:10.003 "min_read_latency_ticks": 234292, 00:10:10.003 "write_latency_ticks": 0, 00:10:10.003 "max_write_latency_ticks": 0, 00:10:10.003 "min_write_latency_ticks": 0, 00:10:10.003 "unmap_latency_ticks": 0, 00:10:10.003 "max_unmap_latency_ticks": 0, 00:10:10.003 "min_unmap_latency_ticks": 0, 00:10:10.003 "copy_latency_ticks": 0, 00:10:10.003 "max_copy_latency_ticks": 0, 00:10:10.003 "min_copy_latency_ticks": 0, 00:10:10.003 "io_error": {}, 00:10:10.003 "queue_depth_polling_period": 10, 00:10:10.003 "queue_depth": 512, 00:10:10.003 "io_time": 90, 00:10:10.003 "weighted_io_time": 46080 00:10:10.003 } 00:10:10.003 ] 00:10:10.003 }' 00:10:10.003 07:56:15 -- bdev/blockdev.sh@525 -- # jq -r 
'.bdevs[0].queue_depth_polling_period' 00:10:10.003 07:56:15 -- bdev/blockdev.sh@525 -- # qd_sampling_period=10 00:10:10.003 07:56:15 -- bdev/blockdev.sh@527 -- # '[' 10 == null ']' 00:10:10.003 07:56:15 -- bdev/blockdev.sh@527 -- # '[' 10 -ne 10 ']' 00:10:10.003 07:56:15 -- bdev/blockdev.sh@551 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:10:10.003 07:56:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:10.003 07:56:15 -- common/autotest_common.sh@10 -- # set +x 00:10:10.003 00:10:10.003 Latency(us) 00:10:10.003 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.003 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:10:10.003 Malloc_QD : 1.98 144825.91 565.73 0.00 0.00 1764.88 446.66 3167.57 00:10:10.003 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:10:10.003 Malloc_QD : 1.98 146986.96 574.17 0.00 0.00 1738.95 323.78 2168.93 00:10:10.003 =================================================================================================================== 00:10:10.003 Total : 291812.87 1139.89 0.00 0.00 1751.82 323.78 3167.57 00:10:10.003 0 00:10:10.003 07:56:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:10.003 07:56:15 -- bdev/blockdev.sh@552 -- # killprocess 57065 00:10:10.003 07:56:15 -- common/autotest_common.sh@926 -- # '[' -z 57065 ']' 00:10:10.003 07:56:15 -- common/autotest_common.sh@930 -- # kill -0 57065 00:10:10.003 07:56:15 -- common/autotest_common.sh@931 -- # uname 00:10:10.003 07:56:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:10.003 07:56:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57065 00:10:10.003 killing process with pid 57065 00:10:10.003 Received shutdown signal, test time was about 2.022342 seconds 00:10:10.003 00:10:10.003 Latency(us) 00:10:10.003 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.003 =================================================================================================================== 00:10:10.003 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:10.003 07:56:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:10.003 07:56:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:10.003 07:56:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57065' 00:10:10.003 07:56:15 -- common/autotest_common.sh@945 -- # kill 57065 00:10:10.003 07:56:15 -- common/autotest_common.sh@950 -- # wait 57065 00:10:10.262 ************************************ 00:10:10.262 END TEST bdev_qd_sampling 00:10:10.262 ************************************ 00:10:10.262 07:56:15 -- bdev/blockdev.sh@553 -- # trap - SIGINT SIGTERM EXIT 00:10:10.262 00:10:10.262 real 0m3.296s 00:10:10.262 user 0m6.240s 00:10:10.262 sys 0m0.314s 00:10:10.262 07:56:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:10.262 07:56:15 -- common/autotest_common.sh@10 -- # set +x 00:10:10.262 07:56:16 -- bdev/blockdev.sh@788 -- # run_test bdev_error error_test_suite '' 00:10:10.262 07:56:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:10.262 07:56:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:10.262 07:56:16 -- common/autotest_common.sh@10 -- # set +x 00:10:10.262 ************************************ 00:10:10.262 START TEST bdev_error 00:10:10.262 ************************************ 00:10:10.262 07:56:16 -- common/autotest_common.sh@1104 -- # error_test_suite '' 00:10:10.262 07:56:16 -- bdev/blockdev.sh@464 -- 
# DEV_1=Dev_1 00:10:10.262 Process error testing pid: 57142 00:10:10.262 07:56:16 -- bdev/blockdev.sh@465 -- # DEV_2=Dev_2 00:10:10.262 07:56:16 -- bdev/blockdev.sh@466 -- # ERR_DEV=EE_Dev_1 00:10:10.262 07:56:16 -- bdev/blockdev.sh@470 -- # ERR_PID=57142 00:10:10.262 07:56:16 -- bdev/blockdev.sh@471 -- # echo 'Process error testing pid: 57142' 00:10:10.262 07:56:16 -- bdev/blockdev.sh@469 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:10:10.262 07:56:16 -- bdev/blockdev.sh@472 -- # waitforlisten 57142 00:10:10.262 07:56:16 -- common/autotest_common.sh@819 -- # '[' -z 57142 ']' 00:10:10.262 07:56:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.262 07:56:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:10.262 07:56:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.262 07:56:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:10.262 07:56:16 -- common/autotest_common.sh@10 -- # set +x 00:10:10.521 [2024-07-13 07:56:16.160083] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:10.521 [2024-07-13 07:56:16.160262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57142 ] 00:10:10.521 [2024-07-13 07:56:16.294009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.779 [2024-07-13 07:56:16.338897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.345 07:56:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:11.345 07:56:17 -- common/autotest_common.sh@852 -- # return 0 00:10:11.345 07:56:17 -- bdev/blockdev.sh@474 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:10:11.345 07:56:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:11.345 07:56:17 -- common/autotest_common.sh@10 -- # set +x 00:10:11.345 Dev_1 00:10:11.345 07:56:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:11.345 07:56:17 -- bdev/blockdev.sh@475 -- # waitforbdev Dev_1 00:10:11.345 07:56:17 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:10:11.345 07:56:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:11.345 07:56:17 -- common/autotest_common.sh@889 -- # local i 00:10:11.345 07:56:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:11.345 07:56:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:11.345 07:56:17 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:10:11.345 07:56:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:11.345 07:56:17 -- common/autotest_common.sh@10 -- # set +x 00:10:11.345 07:56:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:11.345 07:56:17 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:10:11.345 07:56:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:11.345 07:56:17 -- common/autotest_common.sh@10 -- # set +x 00:10:11.345 [ 00:10:11.345 { 00:10:11.345 "name": "Dev_1", 00:10:11.345 "aliases": [ 00:10:11.345 "a32e8d44-6751-4f45-bcee-ae251fbc57e1" 00:10:11.345 ], 00:10:11.345 "product_name": "Malloc disk", 00:10:11.345 "block_size": 512, 00:10:11.345 
"num_blocks": 262144, 00:10:11.345 "uuid": "a32e8d44-6751-4f45-bcee-ae251fbc57e1", 00:10:11.345 "assigned_rate_limits": { 00:10:11.345 "rw_ios_per_sec": 0, 00:10:11.345 "rw_mbytes_per_sec": 0, 00:10:11.345 "r_mbytes_per_sec": 0, 00:10:11.345 "w_mbytes_per_sec": 0 00:10:11.345 }, 00:10:11.345 "claimed": false, 00:10:11.346 "zoned": false, 00:10:11.346 "supported_io_types": { 00:10:11.346 "read": true, 00:10:11.346 "write": true, 00:10:11.346 "unmap": true, 00:10:11.346 "write_zeroes": true, 00:10:11.346 "flush": true, 00:10:11.346 "reset": true, 00:10:11.346 "compare": false, 00:10:11.346 "compare_and_write": false, 00:10:11.346 "abort": true, 00:10:11.346 "nvme_admin": false, 00:10:11.346 "nvme_io": false 00:10:11.346 }, 00:10:11.346 "memory_domains": [ 00:10:11.346 { 00:10:11.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.346 "dma_device_type": 2 00:10:11.346 } 00:10:11.346 ], 00:10:11.346 "driver_specific": {} 00:10:11.346 } 00:10:11.346 ] 00:10:11.346 07:56:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:11.346 07:56:17 -- common/autotest_common.sh@895 -- # return 0 00:10:11.346 07:56:17 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_error_create Dev_1 00:10:11.346 07:56:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:11.346 07:56:17 -- common/autotest_common.sh@10 -- # set +x 00:10:11.346 true 00:10:11.346 07:56:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:11.346 07:56:17 -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:10:11.346 07:56:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:11.346 07:56:17 -- common/autotest_common.sh@10 -- # set +x 00:10:11.604 Dev_2 00:10:11.604 07:56:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:11.604 07:56:17 -- bdev/blockdev.sh@478 -- # waitforbdev Dev_2 00:10:11.604 07:56:17 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:10:11.604 07:56:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:11.604 07:56:17 -- common/autotest_common.sh@889 -- # local i 00:10:11.604 07:56:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:11.604 07:56:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:11.604 07:56:17 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:10:11.604 07:56:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:11.604 07:56:17 -- common/autotest_common.sh@10 -- # set +x 00:10:11.604 07:56:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:11.604 07:56:17 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:10:11.604 07:56:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:11.604 07:56:17 -- common/autotest_common.sh@10 -- # set +x 00:10:11.604 [ 00:10:11.604 { 00:10:11.604 "name": "Dev_2", 00:10:11.604 "aliases": [ 00:10:11.604 "978f97d9-82b4-4aeb-986b-e472ae640af9" 00:10:11.604 ], 00:10:11.604 "product_name": "Malloc disk", 00:10:11.604 "block_size": 512, 00:10:11.604 "num_blocks": 262144, 00:10:11.604 "uuid": "978f97d9-82b4-4aeb-986b-e472ae640af9", 00:10:11.604 "assigned_rate_limits": { 00:10:11.604 "rw_ios_per_sec": 0, 00:10:11.604 "rw_mbytes_per_sec": 0, 00:10:11.604 "r_mbytes_per_sec": 0, 00:10:11.604 "w_mbytes_per_sec": 0 00:10:11.604 }, 00:10:11.604 "claimed": false, 00:10:11.604 "zoned": false, 00:10:11.604 "supported_io_types": { 00:10:11.604 "read": true, 00:10:11.604 "write": true, 00:10:11.604 "unmap": true, 00:10:11.604 "write_zeroes": true, 00:10:11.604 "flush": true, 00:10:11.604 "reset": true, 00:10:11.604 
"compare": false, 00:10:11.604 "compare_and_write": false, 00:10:11.604 "abort": true, 00:10:11.604 "nvme_admin": false, 00:10:11.604 "nvme_io": false 00:10:11.604 }, 00:10:11.604 "memory_domains": [ 00:10:11.604 { 00:10:11.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.604 "dma_device_type": 2 00:10:11.604 } 00:10:11.604 ], 00:10:11.604 "driver_specific": {} 00:10:11.604 } 00:10:11.604 ] 00:10:11.604 07:56:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:11.604 07:56:17 -- common/autotest_common.sh@895 -- # return 0 00:10:11.604 07:56:17 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:10:11.604 07:56:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:11.604 07:56:17 -- common/autotest_common.sh@10 -- # set +x 00:10:11.604 07:56:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:11.604 07:56:17 -- bdev/blockdev.sh@482 -- # sleep 1 00:10:11.604 07:56:17 -- bdev/blockdev.sh@481 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:10:11.604 Running I/O for 5 seconds... 00:10:12.537 Process is existed as continue on error is set. Pid: 57142 00:10:12.537 07:56:18 -- bdev/blockdev.sh@485 -- # kill -0 57142 00:10:12.537 07:56:18 -- bdev/blockdev.sh@486 -- # echo 'Process is existed as continue on error is set. Pid: 57142' 00:10:12.537 07:56:18 -- bdev/blockdev.sh@493 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:10:12.537 07:56:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:12.537 07:56:18 -- common/autotest_common.sh@10 -- # set +x 00:10:12.537 07:56:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:12.537 07:56:18 -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_malloc_delete Dev_1 00:10:12.537 07:56:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:12.537 07:56:18 -- common/autotest_common.sh@10 -- # set +x 00:10:12.537 07:56:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:12.537 07:56:18 -- bdev/blockdev.sh@495 -- # sleep 5 00:10:12.537 Timeout while waiting for response: 00:10:12.537 00:10:12.537 00:10:16.722 00:10:16.722 Latency(us) 00:10:16.722 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:16.722 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:10:16.722 EE_Dev_1 : 0.90 132981.22 519.46 5.58 0.00 119.56 73.63 374.49 00:10:16.722 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:10:16.722 Dev_2 : 5.00 289109.05 1129.33 0.00 0.00 54.58 18.90 15666.22 00:10:16.722 =================================================================================================================== 00:10:16.722 Total : 422090.28 1648.79 5.58 0.00 59.53 18.90 15666.22 00:10:17.684 07:56:23 -- bdev/blockdev.sh@497 -- # killprocess 57142 00:10:17.684 07:56:23 -- common/autotest_common.sh@926 -- # '[' -z 57142 ']' 00:10:17.684 07:56:23 -- common/autotest_common.sh@930 -- # kill -0 57142 00:10:17.684 07:56:23 -- common/autotest_common.sh@931 -- # uname 00:10:17.684 07:56:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:17.684 07:56:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57142 00:10:17.684 killing process with pid 57142 00:10:17.684 Received shutdown signal, test time was about 5.000000 seconds 00:10:17.684 00:10:17.684 Latency(us) 00:10:17.684 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.684 
=================================================================================================================== 00:10:17.684 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:17.684 07:56:23 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:10:17.684 07:56:23 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:10:17.684 07:56:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57142' 00:10:17.684 07:56:23 -- common/autotest_common.sh@945 -- # kill 57142 00:10:17.684 07:56:23 -- common/autotest_common.sh@950 -- # wait 57142 00:10:17.684 Process error testing pid: 57247 00:10:17.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.684 07:56:23 -- bdev/blockdev.sh@501 -- # ERR_PID=57247 00:10:17.684 07:56:23 -- bdev/blockdev.sh@502 -- # echo 'Process error testing pid: 57247' 00:10:17.684 07:56:23 -- bdev/blockdev.sh@503 -- # waitforlisten 57247 00:10:17.684 07:56:23 -- common/autotest_common.sh@819 -- # '[' -z 57247 ']' 00:10:17.684 07:56:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.684 07:56:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:17.684 07:56:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.684 07:56:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:17.684 07:56:23 -- bdev/blockdev.sh@500 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:10:17.684 07:56:23 -- common/autotest_common.sh@10 -- # set +x 00:10:17.943 [2024-07-13 07:56:23.633717] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:17.943 [2024-07-13 07:56:23.633955] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57247 ] 00:10:18.202 [2024-07-13 07:56:23.777859] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.202 [2024-07-13 07:56:23.822848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.769 07:56:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:18.769 07:56:24 -- common/autotest_common.sh@852 -- # return 0 00:10:18.769 07:56:24 -- bdev/blockdev.sh@505 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:10:18.769 07:56:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:18.770 07:56:24 -- common/autotest_common.sh@10 -- # set +x 00:10:18.770 Dev_1 00:10:18.770 07:56:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:18.770 07:56:24 -- bdev/blockdev.sh@506 -- # waitforbdev Dev_1 00:10:18.770 07:56:24 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:10:18.770 07:56:24 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:18.770 07:56:24 -- common/autotest_common.sh@889 -- # local i 00:10:18.770 07:56:24 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:18.770 07:56:24 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:18.770 07:56:24 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:10:18.770 07:56:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:18.770 07:56:24 -- common/autotest_common.sh@10 -- # set +x 00:10:18.770 07:56:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:18.770 07:56:24 -- common/autotest_common.sh@894 -- # 
rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:10:18.770 07:56:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:18.770 07:56:24 -- common/autotest_common.sh@10 -- # set +x 00:10:18.770 [ 00:10:18.770 { 00:10:18.770 "name": "Dev_1", 00:10:18.770 "aliases": [ 00:10:18.770 "dde00820-e64f-4883-adfe-99e9be9f0ab3" 00:10:18.770 ], 00:10:18.770 "product_name": "Malloc disk", 00:10:18.770 "block_size": 512, 00:10:18.770 "num_blocks": 262144, 00:10:18.770 "uuid": "dde00820-e64f-4883-adfe-99e9be9f0ab3", 00:10:18.770 "assigned_rate_limits": { 00:10:18.770 "rw_ios_per_sec": 0, 00:10:18.770 "rw_mbytes_per_sec": 0, 00:10:18.770 "r_mbytes_per_sec": 0, 00:10:18.770 "w_mbytes_per_sec": 0 00:10:18.770 }, 00:10:18.770 "claimed": false, 00:10:18.770 "zoned": false, 00:10:18.770 "supported_io_types": { 00:10:18.770 "read": true, 00:10:18.770 "write": true, 00:10:18.770 "unmap": true, 00:10:18.770 "write_zeroes": true, 00:10:18.770 "flush": true, 00:10:18.770 "reset": true, 00:10:18.770 "compare": false, 00:10:18.770 "compare_and_write": false, 00:10:18.770 "abort": true, 00:10:18.770 "nvme_admin": false, 00:10:18.770 "nvme_io": false 00:10:18.770 }, 00:10:18.770 "memory_domains": [ 00:10:18.770 { 00:10:18.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.770 "dma_device_type": 2 00:10:18.770 } 00:10:18.770 ], 00:10:18.770 "driver_specific": {} 00:10:18.770 } 00:10:18.770 ] 00:10:18.770 07:56:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:18.770 07:56:24 -- common/autotest_common.sh@895 -- # return 0 00:10:18.770 07:56:24 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_error_create Dev_1 00:10:18.770 07:56:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:18.770 07:56:24 -- common/autotest_common.sh@10 -- # set +x 00:10:18.770 true 00:10:18.770 07:56:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:18.770 07:56:24 -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:10:18.770 07:56:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:18.770 07:56:24 -- common/autotest_common.sh@10 -- # set +x 00:10:18.770 Dev_2 00:10:18.770 07:56:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:18.770 07:56:24 -- bdev/blockdev.sh@509 -- # waitforbdev Dev_2 00:10:18.770 07:56:24 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:10:18.770 07:56:24 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:18.770 07:56:24 -- common/autotest_common.sh@889 -- # local i 00:10:18.770 07:56:24 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:18.770 07:56:24 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:18.770 07:56:24 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:10:18.770 07:56:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:18.770 07:56:24 -- common/autotest_common.sh@10 -- # set +x 00:10:18.770 07:56:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:18.770 07:56:24 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:10:18.770 07:56:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:18.770 07:56:24 -- common/autotest_common.sh@10 -- # set +x 00:10:18.770 [ 00:10:18.770 { 00:10:18.770 "name": "Dev_2", 00:10:18.770 "aliases": [ 00:10:18.770 "13aa5077-c74e-41a7-9c01-c3304a53dbe2" 00:10:18.770 ], 00:10:18.770 "product_name": "Malloc disk", 00:10:18.770 "block_size": 512, 00:10:18.770 "num_blocks": 262144, 00:10:18.770 "uuid": "13aa5077-c74e-41a7-9c01-c3304a53dbe2", 00:10:18.770 "assigned_rate_limits": { 00:10:18.770 
"rw_ios_per_sec": 0, 00:10:18.770 "rw_mbytes_per_sec": 0, 00:10:18.770 "r_mbytes_per_sec": 0, 00:10:18.770 "w_mbytes_per_sec": 0 00:10:18.770 }, 00:10:18.770 "claimed": false, 00:10:18.770 "zoned": false, 00:10:18.770 "supported_io_types": { 00:10:18.770 "read": true, 00:10:18.770 "write": true, 00:10:18.770 "unmap": true, 00:10:18.770 "write_zeroes": true, 00:10:18.770 "flush": true, 00:10:18.770 "reset": true, 00:10:18.770 "compare": false, 00:10:18.770 "compare_and_write": false, 00:10:18.770 "abort": true, 00:10:18.770 "nvme_admin": false, 00:10:18.770 "nvme_io": false 00:10:18.770 }, 00:10:18.770 "memory_domains": [ 00:10:18.770 { 00:10:18.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.770 "dma_device_type": 2 00:10:18.770 } 00:10:18.770 ], 00:10:18.770 "driver_specific": {} 00:10:18.770 } 00:10:18.770 ] 00:10:18.770 07:56:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:18.770 07:56:24 -- common/autotest_common.sh@895 -- # return 0 00:10:18.770 07:56:24 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:10:18.770 07:56:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:18.770 07:56:24 -- common/autotest_common.sh@10 -- # set +x 00:10:18.770 07:56:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:18.770 07:56:24 -- bdev/blockdev.sh@513 -- # NOT wait 57247 00:10:18.770 07:56:24 -- common/autotest_common.sh@640 -- # local es=0 00:10:18.770 07:56:24 -- bdev/blockdev.sh@512 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:10:18.770 07:56:24 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 57247 00:10:18.770 07:56:24 -- common/autotest_common.sh@628 -- # local arg=wait 00:10:18.770 07:56:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:18.770 07:56:24 -- common/autotest_common.sh@632 -- # type -t wait 00:10:18.770 07:56:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:18.770 07:56:24 -- common/autotest_common.sh@643 -- # wait 57247 00:10:18.770 Running I/O for 5 seconds... 
00:10:19.029 task offset: 216184 on job bdev=EE_Dev_1 fails
00:10:19.029
00:10:19.029 Latency(us)
00:10:19.029 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:19.029 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096)
00:10:19.029 Job: EE_Dev_1 ended in about 0.00 seconds with error
00:10:19.029 EE_Dev_1 : 0.00 40590.41 158.56 9225.09 0.00 269.59 74.61 487.62
00:10:19.029 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096)
00:10:19.029 Dev_2 : 0.00 48120.30 187.97 0.00 0.00 214.78 65.83 384.24
00:10:19.029 ===================================================================================================================
00:10:19.029 Total : 88710.71 346.53 9225.09 0.00 239.86 65.83 487.62
00:10:19.029 request:
00:10:19.029 {
00:10:19.029 "method": "perform_tests",
00:10:19.029 "req_id": 1
00:10:19.029 }
00:10:19.029 Got JSON-RPC error response
00:10:19.029 response:
00:10:19.029 {
00:10:19.029 "code": -32603,
00:10:19.029 "message": "bdevperf failed with error Operation not permitted"
00:10:19.029 }
00:10:19.029 [2024-07-13 07:56:24.581501] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:10:19.288 ************************************
00:10:19.288 END TEST bdev_error
00:10:19.288 ************************************
00:10:19.288 07:56:24 -- common/autotest_common.sh@643 -- # es=255
00:10:19.288 07:56:24 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:10:19.288 07:56:24 -- common/autotest_common.sh@652 -- # es=127
00:10:19.288 07:56:24 -- common/autotest_common.sh@653 -- # case "$es" in
00:10:19.288 07:56:24 -- common/autotest_common.sh@660 -- # es=1
00:10:19.288 07:56:24 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:10:19.288
00:10:19.288 real 0m8.836s
00:10:19.288 user 0m8.952s
00:10:19.288 sys 0m0.648s
00:10:19.288 07:56:24 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:19.288 07:56:24 -- common/autotest_common.sh@10 -- # set +x
00:10:19.288 07:56:24 -- bdev/blockdev.sh@789 -- # run_test bdev_stat stat_test_suite ''
00:10:19.288 07:56:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:10:19.288 07:56:24 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:10:19.288 07:56:24 -- common/autotest_common.sh@10 -- # set +x
00:10:19.288 ************************************
00:10:19.288 START TEST bdev_stat
00:10:19.288 ************************************
00:10:19.288 07:56:24 -- common/autotest_common.sh@1104 -- # stat_test_suite ''
00:10:19.288 07:56:24 -- bdev/blockdev.sh@590 -- # STAT_DEV=Malloc_STAT
Process Bdev IO statistics testing pid: 57291
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
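
Note: the bdev_stat suite starting here samples bdev_get_iostat twice and checks that the per-channel read counts sum to a value bracketed by the two aggregate snapshots (io_count1 <= channel sum <= io_count2). A rough sketch of the two queries the test issues, with the jq paths taken from the xtrace below:

  ./scripts/rpc.py bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops'
  # per-channel view (-c): one entry per reactor thread submitting I/O (core mask 0x3 here)
  ./scripts/rpc.py bdev_get_iostat -b Malloc_STAT -c | jq -r '.channels[].num_read_ops'
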
00:10:19.288 07:56:24 -- bdev/blockdev.sh@594 -- # STAT_PID=57291 00:10:19.288 07:56:24 -- bdev/blockdev.sh@595 -- # echo 'Process Bdev IO statistics testing pid: 57291' 00:10:19.288 07:56:24 -- bdev/blockdev.sh@596 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:10:19.288 07:56:24 -- bdev/blockdev.sh@597 -- # waitforlisten 57291 00:10:19.288 07:56:24 -- common/autotest_common.sh@819 -- # '[' -z 57291 ']' 00:10:19.288 07:56:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.288 07:56:24 -- bdev/blockdev.sh@593 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:10:19.288 07:56:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:19.288 07:56:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.288 07:56:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:19.288 07:56:24 -- common/autotest_common.sh@10 -- # set +x 00:10:19.288 [2024-07-13 07:56:25.048316] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:19.288 [2024-07-13 07:56:25.048739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57291 ] 00:10:19.547 [2024-07-13 07:56:25.188521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:19.547 [2024-07-13 07:56:25.240769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.547 [2024-07-13 07:56:25.240776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.115 07:56:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:20.115 07:56:25 -- common/autotest_common.sh@852 -- # return 0 00:10:20.116 07:56:25 -- bdev/blockdev.sh@599 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:10:20.116 07:56:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:20.116 07:56:25 -- common/autotest_common.sh@10 -- # set +x 00:10:20.374 Malloc_STAT 00:10:20.374 07:56:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:20.374 07:56:25 -- bdev/blockdev.sh@600 -- # waitforbdev Malloc_STAT 00:10:20.374 07:56:25 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_STAT 00:10:20.374 07:56:25 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:20.374 07:56:25 -- common/autotest_common.sh@889 -- # local i 00:10:20.374 07:56:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:20.374 07:56:25 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:20.374 07:56:25 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:10:20.374 07:56:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:20.374 07:56:25 -- common/autotest_common.sh@10 -- # set +x 00:10:20.374 07:56:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:20.374 07:56:25 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:10:20.374 07:56:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:20.374 07:56:25 -- common/autotest_common.sh@10 -- # set +x 00:10:20.374 [ 00:10:20.374 { 00:10:20.374 "name": "Malloc_STAT", 00:10:20.374 "aliases": [ 00:10:20.374 "69496e21-1f95-484e-bc8e-c5e2876cc4ad" 00:10:20.374 ], 00:10:20.374 "product_name": "Malloc disk", 00:10:20.375 "block_size": 512, 00:10:20.375 "num_blocks": 262144, 
00:10:20.375 "uuid": "69496e21-1f95-484e-bc8e-c5e2876cc4ad", 00:10:20.375 "assigned_rate_limits": { 00:10:20.375 "rw_ios_per_sec": 0, 00:10:20.375 "rw_mbytes_per_sec": 0, 00:10:20.375 "r_mbytes_per_sec": 0, 00:10:20.375 "w_mbytes_per_sec": 0 00:10:20.375 }, 00:10:20.375 "claimed": false, 00:10:20.375 "zoned": false, 00:10:20.375 "supported_io_types": { 00:10:20.375 "read": true, 00:10:20.375 "write": true, 00:10:20.375 "unmap": true, 00:10:20.375 "write_zeroes": true, 00:10:20.375 "flush": true, 00:10:20.375 "reset": true, 00:10:20.375 "compare": false, 00:10:20.375 "compare_and_write": false, 00:10:20.375 "abort": true, 00:10:20.375 "nvme_admin": false, 00:10:20.375 "nvme_io": false 00:10:20.375 }, 00:10:20.375 "memory_domains": [ 00:10:20.375 { 00:10:20.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.375 "dma_device_type": 2 00:10:20.375 } 00:10:20.375 ], 00:10:20.375 "driver_specific": {} 00:10:20.375 } 00:10:20.375 ] 00:10:20.375 07:56:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:20.375 07:56:25 -- common/autotest_common.sh@895 -- # return 0 00:10:20.375 07:56:25 -- bdev/blockdev.sh@603 -- # sleep 2 00:10:20.375 07:56:25 -- bdev/blockdev.sh@602 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:20.375 Running I/O for 10 seconds... 00:10:22.279 07:56:27 -- bdev/blockdev.sh@604 -- # stat_function_test Malloc_STAT 00:10:22.279 07:56:27 -- bdev/blockdev.sh@557 -- # local bdev_name=Malloc_STAT 00:10:22.279 07:56:27 -- bdev/blockdev.sh@558 -- # local iostats 00:10:22.279 07:56:27 -- bdev/blockdev.sh@559 -- # local io_count1 00:10:22.279 07:56:27 -- bdev/blockdev.sh@560 -- # local io_count2 00:10:22.279 07:56:27 -- bdev/blockdev.sh@561 -- # local iostats_per_channel 00:10:22.279 07:56:27 -- bdev/blockdev.sh@562 -- # local io_count_per_channel1 00:10:22.279 07:56:27 -- bdev/blockdev.sh@563 -- # local io_count_per_channel2 00:10:22.279 07:56:27 -- bdev/blockdev.sh@564 -- # local io_count_per_channel_all=0 00:10:22.279 07:56:27 -- bdev/blockdev.sh@566 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:10:22.279 07:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:22.279 07:56:27 -- common/autotest_common.sh@10 -- # set +x 00:10:22.279 07:56:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:22.279 07:56:28 -- bdev/blockdev.sh@566 -- # iostats='{ 00:10:22.279 "tick_rate": 2100000000, 00:10:22.279 "ticks": 1111767960122, 00:10:22.279 "bdevs": [ 00:10:22.279 { 00:10:22.279 "name": "Malloc_STAT", 00:10:22.279 "bytes_read": 2054197760, 00:10:22.279 "num_read_ops": 501507, 00:10:22.279 "bytes_written": 0, 00:10:22.279 "num_write_ops": 0, 00:10:22.279 "bytes_unmapped": 0, 00:10:22.279 "num_unmap_ops": 0, 00:10:22.279 "bytes_copied": 0, 00:10:22.279 "num_copy_ops": 0, 00:10:22.279 "read_latency_ticks": 2034916003834, 00:10:22.279 "max_read_latency_ticks": 6454410, 00:10:22.279 "min_read_latency_ticks": 204292, 00:10:22.279 "write_latency_ticks": 0, 00:10:22.279 "max_write_latency_ticks": 0, 00:10:22.279 "min_write_latency_ticks": 0, 00:10:22.279 "unmap_latency_ticks": 0, 00:10:22.279 "max_unmap_latency_ticks": 0, 00:10:22.279 "min_unmap_latency_ticks": 0, 00:10:22.279 "copy_latency_ticks": 0, 00:10:22.279 "max_copy_latency_ticks": 0, 00:10:22.279 "min_copy_latency_ticks": 0, 00:10:22.279 "io_error": {} 00:10:22.279 } 00:10:22.279 ] 00:10:22.279 }' 00:10:22.279 07:56:28 -- bdev/blockdev.sh@567 -- # jq -r '.bdevs[0].num_read_ops' 00:10:22.279 07:56:28 -- bdev/blockdev.sh@567 -- # io_count1=501507 00:10:22.279 07:56:28 -- 
bdev/blockdev.sh@569 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:10:22.279 07:56:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:22.279 07:56:28 -- common/autotest_common.sh@10 -- # set +x 00:10:22.538 07:56:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:22.538 07:56:28 -- bdev/blockdev.sh@569 -- # iostats_per_channel='{ 00:10:22.538 "tick_rate": 2100000000, 00:10:22.538 "ticks": 1111933542258, 00:10:22.538 "name": "Malloc_STAT", 00:10:22.538 "channels": [ 00:10:22.538 { 00:10:22.538 "thread_id": 2, 00:10:22.538 "bytes_read": 1051721728, 00:10:22.538 "num_read_ops": 256768, 00:10:22.538 "bytes_written": 0, 00:10:22.538 "num_write_ops": 0, 00:10:22.538 "bytes_unmapped": 0, 00:10:22.538 "num_unmap_ops": 0, 00:10:22.538 "bytes_copied": 0, 00:10:22.538 "num_copy_ops": 0, 00:10:22.538 "read_latency_ticks": 1059294408940, 00:10:22.538 "max_read_latency_ticks": 6454410, 00:10:22.538 "min_read_latency_ticks": 3592176, 00:10:22.538 "write_latency_ticks": 0, 00:10:22.538 "max_write_latency_ticks": 0, 00:10:22.538 "min_write_latency_ticks": 0, 00:10:22.538 "unmap_latency_ticks": 0, 00:10:22.538 "max_unmap_latency_ticks": 0, 00:10:22.538 "min_unmap_latency_ticks": 0, 00:10:22.538 "copy_latency_ticks": 0, 00:10:22.538 "max_copy_latency_ticks": 0, 00:10:22.538 "min_copy_latency_ticks": 0 00:10:22.538 }, 00:10:22.538 { 00:10:22.538 "thread_id": 3, 00:10:22.538 "bytes_read": 1089470464, 00:10:22.538 "num_read_ops": 265984, 00:10:22.538 "bytes_written": 0, 00:10:22.538 "num_write_ops": 0, 00:10:22.538 "bytes_unmapped": 0, 00:10:22.538 "num_unmap_ops": 0, 00:10:22.538 "bytes_copied": 0, 00:10:22.538 "num_copy_ops": 0, 00:10:22.538 "read_latency_ticks": 1060335849002, 00:10:22.538 "max_read_latency_ticks": 4671230, 00:10:22.538 "min_read_latency_ticks": 3058134, 00:10:22.538 "write_latency_ticks": 0, 00:10:22.538 "max_write_latency_ticks": 0, 00:10:22.538 "min_write_latency_ticks": 0, 00:10:22.538 "unmap_latency_ticks": 0, 00:10:22.538 "max_unmap_latency_ticks": 0, 00:10:22.538 "min_unmap_latency_ticks": 0, 00:10:22.538 "copy_latency_ticks": 0, 00:10:22.538 "max_copy_latency_ticks": 0, 00:10:22.538 "min_copy_latency_ticks": 0 00:10:22.538 } 00:10:22.538 ] 00:10:22.538 }' 00:10:22.538 07:56:28 -- bdev/blockdev.sh@570 -- # jq -r '.channels[0].num_read_ops' 00:10:22.538 07:56:28 -- bdev/blockdev.sh@570 -- # io_count_per_channel1=256768 00:10:22.538 07:56:28 -- bdev/blockdev.sh@571 -- # io_count_per_channel_all=256768 00:10:22.538 07:56:28 -- bdev/blockdev.sh@572 -- # jq -r '.channels[1].num_read_ops' 00:10:22.538 07:56:28 -- bdev/blockdev.sh@572 -- # io_count_per_channel2=265984 00:10:22.538 07:56:28 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=522752 00:10:22.538 07:56:28 -- bdev/blockdev.sh@575 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:10:22.538 07:56:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:22.538 07:56:28 -- common/autotest_common.sh@10 -- # set +x 00:10:22.538 07:56:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:22.538 07:56:28 -- bdev/blockdev.sh@575 -- # iostats='{ 00:10:22.538 "tick_rate": 2100000000, 00:10:22.538 "ticks": 1112210372120, 00:10:22.538 "bdevs": [ 00:10:22.538 { 00:10:22.538 "name": "Malloc_STAT", 00:10:22.538 "bytes_read": 2284884480, 00:10:22.538 "num_read_ops": 557827, 00:10:22.538 "bytes_written": 0, 00:10:22.538 "num_write_ops": 0, 00:10:22.538 "bytes_unmapped": 0, 00:10:22.538 "num_unmap_ops": 0, 00:10:22.538 "bytes_copied": 0, 00:10:22.538 "num_copy_ops": 0, 00:10:22.538 "read_latency_ticks": 
2260948477512,
00:10:22.538 "max_read_latency_ticks": 6454410,
00:10:22.538 "min_read_latency_ticks": 204292,
00:10:22.538 "write_latency_ticks": 0,
00:10:22.538 "max_write_latency_ticks": 0,
00:10:22.538 "min_write_latency_ticks": 0,
00:10:22.538 "unmap_latency_ticks": 0,
00:10:22.538 "max_unmap_latency_ticks": 0,
00:10:22.538 "min_unmap_latency_ticks": 0,
00:10:22.538 "copy_latency_ticks": 0,
00:10:22.538 "max_copy_latency_ticks": 0,
00:10:22.538 "min_copy_latency_ticks": 0,
00:10:22.538 "io_error": {}
00:10:22.538 }
00:10:22.538 ]
00:10:22.538 }'
07:56:28 -- bdev/blockdev.sh@576 -- # jq -r '.bdevs[0].num_read_ops'
00:10:22.538 07:56:28 -- bdev/blockdev.sh@576 -- # io_count2=557827
00:10:22.538 07:56:28 -- bdev/blockdev.sh@581 -- # '[' 522752 -lt 501507 ']'
00:10:22.538 07:56:28 -- bdev/blockdev.sh@581 -- # '[' 522752 -gt 557827 ']'
00:10:22.538 07:56:28 -- bdev/blockdev.sh@606 -- # rpc_cmd bdev_malloc_delete Malloc_STAT
00:10:22.538 07:56:28 -- common/autotest_common.sh@551 -- # xtrace_disable
00:10:22.538 07:56:28 -- common/autotest_common.sh@10 -- # set +x
00:10:22.538
00:10:22.538 Latency(us)
00:10:22.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:22.538 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096)
00:10:22.538 Malloc_STAT : 2.18 130302.84 509.00 0.00 0.00 1960.85 511.02 3073.95
00:10:22.538 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:10:22.538 Malloc_STAT : 2.18 134749.57 526.37 0.00 0.00 1897.04 323.78 2231.34
00:10:22.538 ===================================================================================================================
00:10:22.538 Total : 265052.41 1035.36 0.00 0.00 1928.41 323.78 3073.95
00:10:22.538 0
00:10:22.538 07:56:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:10:22.538 07:56:28 -- bdev/blockdev.sh@607 -- # killprocess 57291
00:10:22.538 07:56:28 -- common/autotest_common.sh@926 -- # '[' -z 57291 ']'
00:10:22.538 07:56:28 -- common/autotest_common.sh@930 -- # kill -0 57291
00:10:22.538 07:56:28 -- common/autotest_common.sh@931 -- # uname
00:10:22.538 07:56:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:10:22.538 07:56:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57291
00:10:22.538 killing process with pid 57291
00:10:22.538 07:56:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:10:22.538 07:56:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:10:22.538 07:56:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57291'
Received shutdown signal, test time was about 2.226007 seconds
00:10:22.538
00:10:22.538 Latency(us)
00:10:22.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:22.538 ===================================================================================================================
00:10:22.538 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:10:22.538 07:56:28 -- common/autotest_common.sh@945 -- # kill 57291
00:10:22.538 07:56:28 -- common/autotest_common.sh@950 -- # wait 57291
00:10:22.796 07:56:28 -- bdev/blockdev.sh@608 -- # trap - SIGINT SIGTERM EXIT
00:10:22.796
00:10:22.796 real 0m3.644s
00:10:22.796 user 0m7.208s
00:10:22.796 sys 0m0.351s
00:10:22.796 ************************************
00:10:22.796 END TEST bdev_stat
00:10:22.796 ************************************
00:10:22.796 07:56:28 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:22.796 07:56:28 --
common/autotest_common.sh@10 -- # set +x 00:10:22.796 07:56:28 -- bdev/blockdev.sh@792 -- # [[ bdev == gpt ]] 00:10:22.796 07:56:28 -- bdev/blockdev.sh@796 -- # [[ bdev == crypto_sw ]] 00:10:22.796 07:56:28 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:10:22.796 07:56:28 -- bdev/blockdev.sh@809 -- # cleanup 00:10:22.796 07:56:28 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:22.796 07:56:28 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:22.796 07:56:28 -- bdev/blockdev.sh@24 -- # [[ bdev == rbd ]] 00:10:22.796 07:56:28 -- bdev/blockdev.sh@28 -- # [[ bdev == daos ]] 00:10:22.796 07:56:28 -- bdev/blockdev.sh@32 -- # [[ bdev = \g\p\t ]] 00:10:22.796 07:56:28 -- bdev/blockdev.sh@38 -- # [[ bdev == xnvme ]] 00:10:22.796 ************************************ 00:10:22.796 END TEST blockdev_general 00:10:22.796 ************************************ 00:10:22.796 00:10:22.796 real 1m28.338s 00:10:22.796 user 4m30.774s 00:10:22.796 sys 0m8.010s 00:10:22.796 07:56:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:22.796 07:56:28 -- common/autotest_common.sh@10 -- # set +x 00:10:23.056 07:56:28 -- spdk/autotest.sh@196 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:10:23.056 07:56:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:23.056 07:56:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:23.056 07:56:28 -- common/autotest_common.sh@10 -- # set +x 00:10:23.056 ************************************ 00:10:23.056 START TEST bdev_raid 00:10:23.056 ************************************ 00:10:23.056 07:56:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:10:23.056 * Looking for test storage... 00:10:23.056 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:23.056 07:56:28 -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:23.056 07:56:28 -- bdev/nbd_common.sh@6 -- # set -e 00:10:23.056 07:56:28 -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:10:23.056 07:56:28 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:10:23.056 07:56:28 -- bdev/bdev_raid.sh@716 -- # uname -s 00:10:23.056 07:56:28 -- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']' 00:10:23.056 07:56:28 -- bdev/bdev_raid.sh@716 -- # modprobe -n nbd 00:10:23.056 modprobe: FATAL: Module nbd not found. 
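
Note: the modprobe failure above is expected. bdev_raid.sh probes for the nbd kernel module with a dry run (-n), and since this CentOS 7 VM does not ship it, the nbd-backed parts of the raid suite are skipped while the pure RPC tests below still run. The gate amounts to roughly the following sketch (the variable name is illustrative, not the script's own):

  # dry-run probe; -n resolves the module without loading it
  if [ "$(uname -s)" = Linux ] && modprobe -n nbd; then
      nbd_available=true
  fi
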
00:10:23.056 07:56:28 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:10:23.056 07:56:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:23.056 07:56:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:23.056 07:56:28 -- common/autotest_common.sh@10 -- # set +x 00:10:23.056 ************************************ 00:10:23.056 START TEST raid0_resize_test 00:10:23.056 ************************************ 00:10:23.056 07:56:28 -- common/autotest_common.sh@1104 -- # raid0_resize_test 00:10:23.056 07:56:28 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:10:23.056 07:56:28 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:10:23.056 07:56:28 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:10:23.056 07:56:28 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:10:23.056 07:56:28 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:10:23.056 07:56:28 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:10:23.056 Process raid pid: 57442 00:10:23.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:23.056 07:56:28 -- bdev/bdev_raid.sh@301 -- # raid_pid=57442 00:10:23.056 07:56:28 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 57442' 00:10:23.056 07:56:28 -- bdev/bdev_raid.sh@303 -- # waitforlisten 57442 /var/tmp/spdk-raid.sock 00:10:23.056 07:56:28 -- common/autotest_common.sh@819 -- # '[' -z 57442 ']' 00:10:23.056 07:56:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:23.056 07:56:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:23.056 07:56:28 -- bdev/bdev_raid.sh@300 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:23.056 07:56:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:23.056 07:56:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:23.056 07:56:28 -- common/autotest_common.sh@10 -- # set +x 00:10:23.315 [2024-07-13 07:56:28.897929] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:10:23.315 [2024-07-13 07:56:28.898123] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.315 [2024-07-13 07:56:29.045218] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.315 [2024-07-13 07:56:29.098934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.573 [2024-07-13 07:56:29.149011] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.140 07:56:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:24.140 07:56:29 -- common/autotest_common.sh@852 -- # return 0 00:10:24.140 07:56:29 -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:10:24.140 Base_1 00:10:24.140 07:56:29 -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:10:24.398 Base_2 00:10:24.398 07:56:30 -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:10:24.657 [2024-07-13 07:56:30.278853] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:10:24.657 [2024-07-13 07:56:30.280577] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:10:24.657 [2024-07-13 07:56:30.280653] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000026480 00:10:24.657 [2024-07-13 07:56:30.280665] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:24.657 [2024-07-13 07:56:30.280783] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001d10 00:10:24.657 [2024-07-13 07:56:30.281021] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000026480 00:10:24.657 [2024-07-13 07:56:30.281033] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000026480 00:10:24.657 [2024-07-13 07:56:30.281133] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.657 07:56:30 -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:10:24.657 [2024-07-13 07:56:30.442803] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:24.657 [2024-07-13 07:56:30.442839] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:10:24.657 true 00:10:24.657 07:56:30 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:10:24.657 07:56:30 -- bdev/bdev_raid.sh@314 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:10:24.916 [2024-07-13 07:56:30.722943] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:25.175 07:56:30 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:10:25.175 07:56:30 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:10:25.175 07:56:30 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:10:25.175 07:56:30 -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:10:25.175 [2024-07-13 07:56:30.930852] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 
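
Note: raid0_resize_test, traced above, grows a two-member raid0 by resizing the null bdevs underneath it; the array only expands once every member has grown. Condensed to the RPCs shown in this log (rpc.py shortened with a helper function; socket path as in the xtrace):

  rpc() { ./scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  rpc bdev_null_create Base_1 32 512                    # 32 MiB members, 512 B blocks
  rpc bdev_null_create Base_2 32 512
  rpc bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid
  rpc bdev_null_resize Base_1 64                        # Raid stays at 131072 blocks (64 MiB)
  rpc bdev_null_resize Base_2 64                        # now Raid grows to 262144 blocks (128 MiB)
  rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'
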
00:10:25.175 [2024-07-13 07:56:30.930888] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:10:25.175 [2024-07-13 07:56:30.930922] raid0.c: 402:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144 00:10:25.176 [2024-07-13 07:56:30.930975] bdev_raid.c:1572:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:25.176 true 00:10:25.176 07:56:30 -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:10:25.176 07:56:30 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:10:25.435 [2024-07-13 07:56:31.162990] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:25.435 07:56:31 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:10:25.435 07:56:31 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:10:25.435 07:56:31 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:10:25.435 07:56:31 -- bdev/bdev_raid.sh@332 -- # killprocess 57442 00:10:25.435 07:56:31 -- common/autotest_common.sh@926 -- # '[' -z 57442 ']' 00:10:25.435 07:56:31 -- common/autotest_common.sh@930 -- # kill -0 57442 00:10:25.435 07:56:31 -- common/autotest_common.sh@931 -- # uname 00:10:25.435 07:56:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:25.435 07:56:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57442 00:10:25.435 killing process with pid 57442 00:10:25.435 07:56:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:25.435 07:56:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:25.435 07:56:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57442' 00:10:25.435 07:56:31 -- common/autotest_common.sh@945 -- # kill 57442 00:10:25.435 07:56:31 -- common/autotest_common.sh@950 -- # wait 57442 00:10:25.435 [2024-07-13 07:56:31.213950] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:25.435 [2024-07-13 07:56:31.214037] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.435 [2024-07-13 07:56:31.214078] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:25.435 [2024-07-13 07:56:31.214090] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026480 name Raid, state offline 00:10:25.435 [2024-07-13 07:56:31.214436] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:25.694 ************************************ 00:10:25.694 END TEST raid0_resize_test 00:10:25.694 ************************************ 00:10:25.694 07:56:31 -- bdev/bdev_raid.sh@334 -- # return 0 00:10:25.694 00:10:25.694 real 0m2.676s 00:10:25.694 user 0m4.099s 00:10:25.694 sys 0m0.394s 00:10:25.694 07:56:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:25.694 07:56:31 -- common/autotest_common.sh@10 -- # set +x 00:10:25.694 07:56:31 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:10:25.694 07:56:31 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:10:25.694 07:56:31 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:10:25.694 07:56:31 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:10:25.694 07:56:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:25.694 07:56:31 -- common/autotest_common.sh@10 -- # set +x 00:10:25.694 ************************************ 00:10:25.694 START TEST raid_state_function_test 
00:10:25.694 ************************************ 00:10:25.694 07:56:31 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 false 00:10:25.694 07:56:31 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:10:25.694 07:56:31 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:10:25.694 07:56:31 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:10:25.694 07:56:31 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:10:25.694 07:56:31 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:10:25.694 07:56:31 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:10:25.694 07:56:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:25.694 07:56:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:10:25.694 07:56:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:25.694 07:56:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:25.694 07:56:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:10:25.694 07:56:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:25.694 07:56:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:25.694 Process raid pid: 57517 00:10:25.694 07:56:31 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:10:25.694 07:56:31 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:10:25.694 07:56:31 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:10:25.694 07:56:31 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:10:25.694 07:56:31 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:10:25.694 07:56:31 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:10:25.694 07:56:31 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:10:25.694 07:56:31 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:10:25.694 07:56:31 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:10:25.694 07:56:31 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:10:25.694 07:56:31 -- bdev/bdev_raid.sh@226 -- # raid_pid=57517 00:10:25.694 07:56:31 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 57517' 00:10:25.695 07:56:31 -- bdev/bdev_raid.sh@228 -- # waitforlisten 57517 /var/tmp/spdk-raid.sock 00:10:25.695 07:56:31 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:25.695 07:56:31 -- common/autotest_common.sh@819 -- # '[' -z 57517 ']' 00:10:25.695 07:56:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:25.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:25.695 07:56:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:25.695 07:56:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:25.695 07:56:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:25.695 07:56:31 -- common/autotest_common.sh@10 -- # set +x 00:10:25.953 [2024-07-13 07:56:31.627438] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
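
Note: the verify_raid_bdev_state checks used throughout the test below reduce to a jq filter over the raid RPC dump; the expected "configuring" state means the array exists but its base bdevs do not yet. Sketched with the same socket and filter as the xtrace that follows:

  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid") | .state'    # expect: configuring
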
00:10:25.953 [2024-07-13 07:56:31.627635] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.212 [2024-07-13 07:56:31.764832] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.212 [2024-07-13 07:56:31.814310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.212 [2024-07-13 07:56:31.864133] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.780 07:56:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:26.780 07:56:32 -- common/autotest_common.sh@852 -- # return 0 00:10:26.780 07:56:32 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:27.039 [2024-07-13 07:56:32.735832] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:27.039 [2024-07-13 07:56:32.735919] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:27.039 [2024-07-13 07:56:32.735932] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:27.039 [2024-07-13 07:56:32.735956] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:27.039 07:56:32 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:27.039 07:56:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:27.039 07:56:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:27.039 07:56:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:27.039 07:56:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:27.039 07:56:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:27.039 07:56:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:27.039 07:56:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:27.039 07:56:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:27.039 07:56:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:27.039 07:56:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.039 07:56:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:27.297 07:56:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:27.297 "name": "Existed_Raid", 00:10:27.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.297 "strip_size_kb": 64, 00:10:27.297 "state": "configuring", 00:10:27.297 "raid_level": "raid0", 00:10:27.297 "superblock": false, 00:10:27.297 "num_base_bdevs": 2, 00:10:27.297 "num_base_bdevs_discovered": 0, 00:10:27.297 "num_base_bdevs_operational": 2, 00:10:27.297 "base_bdevs_list": [ 00:10:27.297 { 00:10:27.297 "name": "BaseBdev1", 00:10:27.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.297 "is_configured": false, 00:10:27.297 "data_offset": 0, 00:10:27.297 "data_size": 0 00:10:27.297 }, 00:10:27.297 { 00:10:27.297 "name": "BaseBdev2", 00:10:27.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.297 "is_configured": false, 00:10:27.297 "data_offset": 0, 00:10:27.297 "data_size": 0 00:10:27.297 } 00:10:27.297 ] 00:10:27.297 }' 00:10:27.297 07:56:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:27.297 07:56:32 -- 
common/autotest_common.sh@10 -- # set +x 00:10:28.231 07:56:33 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:28.231 [2024-07-13 07:56:33.931931] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:28.231 [2024-07-13 07:56:33.931981] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000025880 name Existed_Raid, state configuring 00:10:28.231 07:56:33 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:28.490 [2024-07-13 07:56:34.095983] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:28.490 [2024-07-13 07:56:34.096069] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:28.490 [2024-07-13 07:56:34.096082] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:28.490 [2024-07-13 07:56:34.096108] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:28.490 07:56:34 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:28.490 [2024-07-13 07:56:34.278928] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:28.490 BaseBdev1 00:10:28.490 07:56:34 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:10:28.490 07:56:34 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:10:28.490 07:56:34 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:28.490 07:56:34 -- common/autotest_common.sh@889 -- # local i 00:10:28.490 07:56:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:28.490 07:56:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:28.490 07:56:34 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:28.748 07:56:34 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:29.007 [ 00:10:29.007 { 00:10:29.007 "name": "BaseBdev1", 00:10:29.007 "aliases": [ 00:10:29.007 "700aab44-c4ef-4d27-abc8-7c21ebb1abb4" 00:10:29.007 ], 00:10:29.007 "product_name": "Malloc disk", 00:10:29.007 "block_size": 512, 00:10:29.007 "num_blocks": 65536, 00:10:29.007 "uuid": "700aab44-c4ef-4d27-abc8-7c21ebb1abb4", 00:10:29.007 "assigned_rate_limits": { 00:10:29.007 "rw_ios_per_sec": 0, 00:10:29.007 "rw_mbytes_per_sec": 0, 00:10:29.007 "r_mbytes_per_sec": 0, 00:10:29.007 "w_mbytes_per_sec": 0 00:10:29.007 }, 00:10:29.007 "claimed": true, 00:10:29.007 "claim_type": "exclusive_write", 00:10:29.007 "zoned": false, 00:10:29.007 "supported_io_types": { 00:10:29.007 "read": true, 00:10:29.007 "write": true, 00:10:29.007 "unmap": true, 00:10:29.007 "write_zeroes": true, 00:10:29.007 "flush": true, 00:10:29.007 "reset": true, 00:10:29.007 "compare": false, 00:10:29.007 "compare_and_write": false, 00:10:29.007 "abort": true, 00:10:29.007 "nvme_admin": false, 00:10:29.007 "nvme_io": false 00:10:29.007 }, 00:10:29.007 "memory_domains": [ 00:10:29.007 { 00:10:29.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.007 "dma_device_type": 2 00:10:29.007 } 00:10:29.007 ], 00:10:29.007 "driver_specific": {} 00:10:29.007 } 00:10:29.007 ] 00:10:29.007 07:56:34 
-- common/autotest_common.sh@895 -- # return 0 00:10:29.007 07:56:34 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:29.007 07:56:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:29.007 07:56:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:29.007 07:56:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:29.007 07:56:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:29.007 07:56:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:29.007 07:56:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:29.007 07:56:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:29.007 07:56:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:29.007 07:56:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:29.007 07:56:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:29.007 07:56:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.266 07:56:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:29.266 "name": "Existed_Raid", 00:10:29.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.266 "strip_size_kb": 64, 00:10:29.266 "state": "configuring", 00:10:29.266 "raid_level": "raid0", 00:10:29.266 "superblock": false, 00:10:29.266 "num_base_bdevs": 2, 00:10:29.266 "num_base_bdevs_discovered": 1, 00:10:29.266 "num_base_bdevs_operational": 2, 00:10:29.266 "base_bdevs_list": [ 00:10:29.266 { 00:10:29.266 "name": "BaseBdev1", 00:10:29.266 "uuid": "700aab44-c4ef-4d27-abc8-7c21ebb1abb4", 00:10:29.266 "is_configured": true, 00:10:29.266 "data_offset": 0, 00:10:29.266 "data_size": 65536 00:10:29.266 }, 00:10:29.266 { 00:10:29.266 "name": "BaseBdev2", 00:10:29.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.266 "is_configured": false, 00:10:29.266 "data_offset": 0, 00:10:29.266 "data_size": 0 00:10:29.266 } 00:10:29.266 ] 00:10:29.266 }' 00:10:29.266 07:56:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:29.266 07:56:34 -- common/autotest_common.sh@10 -- # set +x 00:10:29.834 07:56:35 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:30.093 [2024-07-13 07:56:35.711363] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:30.093 [2024-07-13 07:56:35.711405] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026180 name Existed_Raid, state configuring 00:10:30.093 07:56:35 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:10:30.093 07:56:35 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:30.351 [2024-07-13 07:56:35.931422] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:30.351 [2024-07-13 07:56:35.933771] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:30.351 [2024-07-13 07:56:35.933860] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:30.351 07:56:35 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:10:30.351 07:56:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:30.351 07:56:35 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:30.351 07:56:35 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:30.351 07:56:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:30.351 07:56:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:30.351 07:56:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:30.351 07:56:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:30.351 07:56:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:30.351 07:56:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:30.351 07:56:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:30.351 07:56:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:30.351 07:56:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:30.351 07:56:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.609 07:56:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:30.609 "name": "Existed_Raid", 00:10:30.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.609 "strip_size_kb": 64, 00:10:30.609 "state": "configuring", 00:10:30.609 "raid_level": "raid0", 00:10:30.609 "superblock": false, 00:10:30.609 "num_base_bdevs": 2, 00:10:30.609 "num_base_bdevs_discovered": 1, 00:10:30.609 "num_base_bdevs_operational": 2, 00:10:30.609 "base_bdevs_list": [ 00:10:30.609 { 00:10:30.609 "name": "BaseBdev1", 00:10:30.609 "uuid": "700aab44-c4ef-4d27-abc8-7c21ebb1abb4", 00:10:30.609 "is_configured": true, 00:10:30.609 "data_offset": 0, 00:10:30.609 "data_size": 65536 00:10:30.609 }, 00:10:30.609 { 00:10:30.609 "name": "BaseBdev2", 00:10:30.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.609 "is_configured": false, 00:10:30.609 "data_offset": 0, 00:10:30.609 "data_size": 0 00:10:30.609 } 00:10:30.609 ] 00:10:30.609 }' 00:10:30.609 07:56:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:30.610 07:56:36 -- common/autotest_common.sh@10 -- # set +x 00:10:31.177 07:56:36 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:31.436 [2024-07-13 07:56:37.031619] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:31.436 [2024-07-13 07:56:37.031661] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000027080 00:10:31.436 [2024-07-13 07:56:37.031672] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:31.436 [2024-07-13 07:56:37.031775] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001eb0 00:10:31.436 [2024-07-13 07:56:37.031981] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000027080 00:10:31.436 [2024-07-13 07:56:37.031992] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000027080 00:10:31.436 [2024-07-13 07:56:37.032145] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.436 BaseBdev2 00:10:31.436 07:56:37 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:10:31.436 07:56:37 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:10:31.436 07:56:37 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:31.436 07:56:37 -- common/autotest_common.sh@889 -- # local i 00:10:31.436 07:56:37 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:31.436 07:56:37 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:31.436 
07:56:37 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:31.696 07:56:37 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:31.696 [ 00:10:31.696 { 00:10:31.696 "name": "BaseBdev2", 00:10:31.696 "aliases": [ 00:10:31.696 "8990846e-66e0-4641-ade6-60e789f456a3" 00:10:31.696 ], 00:10:31.696 "product_name": "Malloc disk", 00:10:31.696 "block_size": 512, 00:10:31.696 "num_blocks": 65536, 00:10:31.696 "uuid": "8990846e-66e0-4641-ade6-60e789f456a3", 00:10:31.696 "assigned_rate_limits": { 00:10:31.696 "rw_ios_per_sec": 0, 00:10:31.696 "rw_mbytes_per_sec": 0, 00:10:31.696 "r_mbytes_per_sec": 0, 00:10:31.696 "w_mbytes_per_sec": 0 00:10:31.696 }, 00:10:31.696 "claimed": true, 00:10:31.696 "claim_type": "exclusive_write", 00:10:31.696 "zoned": false, 00:10:31.696 "supported_io_types": { 00:10:31.696 "read": true, 00:10:31.696 "write": true, 00:10:31.696 "unmap": true, 00:10:31.696 "write_zeroes": true, 00:10:31.696 "flush": true, 00:10:31.696 "reset": true, 00:10:31.696 "compare": false, 00:10:31.696 "compare_and_write": false, 00:10:31.696 "abort": true, 00:10:31.696 "nvme_admin": false, 00:10:31.696 "nvme_io": false 00:10:31.696 }, 00:10:31.696 "memory_domains": [ 00:10:31.696 { 00:10:31.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.696 "dma_device_type": 2 00:10:31.696 } 00:10:31.696 ], 00:10:31.696 "driver_specific": {} 00:10:31.696 } 00:10:31.696 ] 00:10:31.955 07:56:37 -- common/autotest_common.sh@895 -- # return 0 00:10:31.955 07:56:37 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:10:31.955 07:56:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:31.955 07:56:37 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:10:31.955 07:56:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:31.955 07:56:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:31.955 07:56:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:31.955 07:56:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:31.955 07:56:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:31.955 07:56:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:31.955 07:56:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:31.955 07:56:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:31.955 07:56:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:31.955 07:56:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:31.955 07:56:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.955 07:56:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:31.955 "name": "Existed_Raid", 00:10:31.955 "uuid": "036cdd3b-a0d8-4ba2-8b06-dca0c95dabc0", 00:10:31.955 "strip_size_kb": 64, 00:10:31.955 "state": "online", 00:10:31.955 "raid_level": "raid0", 00:10:31.955 "superblock": false, 00:10:31.955 "num_base_bdevs": 2, 00:10:31.955 "num_base_bdevs_discovered": 2, 00:10:31.955 "num_base_bdevs_operational": 2, 00:10:31.955 "base_bdevs_list": [ 00:10:31.955 { 00:10:31.955 "name": "BaseBdev1", 00:10:31.955 "uuid": "700aab44-c4ef-4d27-abc8-7c21ebb1abb4", 00:10:31.955 "is_configured": true, 00:10:31.955 "data_offset": 0, 00:10:31.955 "data_size": 65536 00:10:31.955 }, 00:10:31.955 { 00:10:31.955 "name": "BaseBdev2", 
00:10:31.955 "uuid": "8990846e-66e0-4641-ade6-60e789f456a3", 00:10:31.956 "is_configured": true, 00:10:31.956 "data_offset": 0, 00:10:31.956 "data_size": 65536 00:10:31.956 } 00:10:31.956 ] 00:10:31.956 }' 00:10:31.956 07:56:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:31.956 07:56:37 -- common/autotest_common.sh@10 -- # set +x 00:10:32.889 07:56:38 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:32.889 [2024-07-13 07:56:38.672055] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:32.890 [2024-07-13 07:56:38.672092] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:32.890 [2024-07-13 07:56:38.672151] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:32.890 07:56:38 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:10:32.890 07:56:38 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:10:32.890 07:56:38 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:10:32.890 07:56:38 -- bdev/bdev_raid.sh@197 -- # return 1 00:10:32.890 07:56:38 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:10:32.890 07:56:38 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:10:32.890 07:56:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:32.890 07:56:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:10:32.890 07:56:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:32.890 07:56:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:32.890 07:56:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:10:32.890 07:56:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:32.890 07:56:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:32.890 07:56:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:32.890 07:56:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:33.149 07:56:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:33.149 07:56:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.421 07:56:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:33.421 "name": "Existed_Raid", 00:10:33.421 "uuid": "036cdd3b-a0d8-4ba2-8b06-dca0c95dabc0", 00:10:33.421 "strip_size_kb": 64, 00:10:33.421 "state": "offline", 00:10:33.421 "raid_level": "raid0", 00:10:33.421 "superblock": false, 00:10:33.421 "num_base_bdevs": 2, 00:10:33.421 "num_base_bdevs_discovered": 1, 00:10:33.421 "num_base_bdevs_operational": 1, 00:10:33.421 "base_bdevs_list": [ 00:10:33.421 { 00:10:33.421 "name": null, 00:10:33.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.421 "is_configured": false, 00:10:33.421 "data_offset": 0, 00:10:33.421 "data_size": 65536 00:10:33.421 }, 00:10:33.421 { 00:10:33.421 "name": "BaseBdev2", 00:10:33.421 "uuid": "8990846e-66e0-4641-ade6-60e789f456a3", 00:10:33.421 "is_configured": true, 00:10:33.421 "data_offset": 0, 00:10:33.421 "data_size": 65536 00:10:33.421 } 00:10:33.421 ] 00:10:33.421 }' 00:10:33.421 07:56:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:33.421 07:56:38 -- common/autotest_common.sh@10 -- # set +x 00:10:33.989 07:56:39 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:10:33.989 07:56:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:33.989 07:56:39 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:33.989 07:56:39 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:10:34.246 07:56:39 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:10:34.246 07:56:39 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:34.246 07:56:39 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:34.246 [2024-07-13 07:56:39.971638] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:34.246 [2024-07-13 07:56:39.971684] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027080 name Existed_Raid, state offline 00:10:34.246 07:56:40 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:10:34.246 07:56:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:34.246 07:56:40 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:34.246 07:56:40 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:10:34.504 07:56:40 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:10:34.504 07:56:40 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:10:34.504 07:56:40 -- bdev/bdev_raid.sh@287 -- # killprocess 57517 00:10:34.504 07:56:40 -- common/autotest_common.sh@926 -- # '[' -z 57517 ']' 00:10:34.504 07:56:40 -- common/autotest_common.sh@930 -- # kill -0 57517 00:10:34.504 07:56:40 -- common/autotest_common.sh@931 -- # uname 00:10:34.504 07:56:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:34.504 07:56:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57517 00:10:34.504 killing process with pid 57517 00:10:34.504 07:56:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:34.504 07:56:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:34.504 07:56:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57517' 00:10:34.504 07:56:40 -- common/autotest_common.sh@945 -- # kill 57517 00:10:34.504 07:56:40 -- common/autotest_common.sh@950 -- # wait 57517 00:10:34.504 [2024-07-13 07:56:40.251443] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:34.504 [2024-07-13 07:56:40.251518] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:34.762 ************************************ 00:10:34.762 END TEST raid_state_function_test 00:10:34.762 ************************************ 00:10:34.762 07:56:40 -- bdev/bdev_raid.sh@289 -- # return 0 00:10:34.762 00:10:34.762 real 0m8.958s 00:10:34.762 user 0m16.420s 00:10:34.762 sys 0m1.105s 00:10:34.762 07:56:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:34.762 07:56:40 -- common/autotest_common.sh@10 -- # set +x 00:10:34.762 07:56:40 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:10:34.762 07:56:40 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:10:34.762 07:56:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:34.762 07:56:40 -- common/autotest_common.sh@10 -- # set +x 00:10:34.762 ************************************ 00:10:34.762 START TEST raid_state_function_test_sb 00:10:34.762 ************************************ 00:10:34.762 07:56:40 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 true 00:10:34.762 07:56:40 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:10:34.762 07:56:40 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:10:34.762 07:56:40 -- 
bdev/bdev_raid.sh@204 -- # local superblock=true 00:10:34.762 07:56:40 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:10:34.762 07:56:40 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:10:34.762 07:56:40 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:10:34.762 07:56:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:34.762 07:56:40 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:10:34.762 07:56:40 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:34.762 07:56:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:34.762 07:56:40 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:10:34.762 07:56:40 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:34.762 07:56:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:34.762 Process raid pid: 57828 00:10:34.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:34.762 07:56:40 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:10:34.762 07:56:40 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:10:34.762 07:56:40 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:10:34.762 07:56:40 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:10:34.762 07:56:40 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:10:34.762 07:56:40 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:10:34.762 07:56:40 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:10:34.762 07:56:40 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:10:34.762 07:56:40 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:10:34.762 07:56:40 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:10:34.762 07:56:40 -- bdev/bdev_raid.sh@226 -- # raid_pid=57828 00:10:34.762 07:56:40 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 57828' 00:10:34.762 07:56:40 -- bdev/bdev_raid.sh@228 -- # waitforlisten 57828 /var/tmp/spdk-raid.sock 00:10:34.762 07:56:40 -- common/autotest_common.sh@819 -- # '[' -z 57828 ']' 00:10:34.762 07:56:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:34.762 07:56:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:34.762 07:56:40 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:34.762 07:56:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:34.762 07:56:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:34.762 07:56:40 -- common/autotest_common.sh@10 -- # set +x 00:10:35.020 [2024-07-13 07:56:40.640720] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:10:35.020 [2024-07-13 07:56:40.640989] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:35.020 [2024-07-13 07:56:40.787545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.278 [2024-07-13 07:56:40.846761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.278 [2024-07-13 07:56:40.897250] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.845 07:56:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:35.845 07:56:41 -- common/autotest_common.sh@852 -- # return 0 00:10:35.845 07:56:41 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:35.845 [2024-07-13 07:56:41.579352] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:35.845 [2024-07-13 07:56:41.579417] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:35.845 [2024-07-13 07:56:41.579428] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:35.845 [2024-07-13 07:56:41.579468] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:35.845 07:56:41 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:35.845 07:56:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:35.845 07:56:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:35.845 07:56:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:35.845 07:56:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:35.845 07:56:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:35.845 07:56:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:35.845 07:56:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:35.845 07:56:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:35.845 07:56:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:35.845 07:56:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:35.845 07:56:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.103 07:56:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:36.103 "name": "Existed_Raid", 00:10:36.103 "uuid": "70ccb739-bfd2-4309-9bae-8f3ba5ebb13e", 00:10:36.103 "strip_size_kb": 64, 00:10:36.103 "state": "configuring", 00:10:36.103 "raid_level": "raid0", 00:10:36.103 "superblock": true, 00:10:36.103 "num_base_bdevs": 2, 00:10:36.103 "num_base_bdevs_discovered": 0, 00:10:36.103 "num_base_bdevs_operational": 2, 00:10:36.103 "base_bdevs_list": [ 00:10:36.103 { 00:10:36.103 "name": "BaseBdev1", 00:10:36.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.103 "is_configured": false, 00:10:36.103 "data_offset": 0, 00:10:36.103 "data_size": 0 00:10:36.103 }, 00:10:36.103 { 00:10:36.103 "name": "BaseBdev2", 00:10:36.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.103 "is_configured": false, 00:10:36.103 "data_offset": 0, 00:10:36.103 "data_size": 0 00:10:36.103 } 00:10:36.103 ] 00:10:36.103 }' 00:10:36.103 07:56:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:36.103 07:56:41 -- 
common/autotest_common.sh@10 -- # set +x 00:10:36.669 07:56:42 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:36.925 [2024-07-13 07:56:42.623406] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:36.925 [2024-07-13 07:56:42.623447] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000025880 name Existed_Raid, state configuring 00:10:36.925 07:56:42 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:37.183 [2024-07-13 07:56:42.847461] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:37.183 [2024-07-13 07:56:42.847543] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:37.183 [2024-07-13 07:56:42.847554] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:37.183 [2024-07-13 07:56:42.847578] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:37.183 07:56:42 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:37.441 BaseBdev1 00:10:37.441 [2024-07-13 07:56:43.069766] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:37.441 07:56:43 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:10:37.441 07:56:43 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:10:37.441 07:56:43 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:37.441 07:56:43 -- common/autotest_common.sh@889 -- # local i 00:10:37.441 07:56:43 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:37.441 07:56:43 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:37.441 07:56:43 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:37.441 07:56:43 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:37.699 [ 00:10:37.699 { 00:10:37.699 "name": "BaseBdev1", 00:10:37.699 "aliases": [ 00:10:37.699 "88f53ac7-ebe6-48e5-b294-a682dade96ac" 00:10:37.699 ], 00:10:37.699 "product_name": "Malloc disk", 00:10:37.699 "block_size": 512, 00:10:37.699 "num_blocks": 65536, 00:10:37.699 "uuid": "88f53ac7-ebe6-48e5-b294-a682dade96ac", 00:10:37.699 "assigned_rate_limits": { 00:10:37.699 "rw_ios_per_sec": 0, 00:10:37.699 "rw_mbytes_per_sec": 0, 00:10:37.699 "r_mbytes_per_sec": 0, 00:10:37.699 "w_mbytes_per_sec": 0 00:10:37.699 }, 00:10:37.699 "claimed": true, 00:10:37.699 "claim_type": "exclusive_write", 00:10:37.699 "zoned": false, 00:10:37.699 "supported_io_types": { 00:10:37.699 "read": true, 00:10:37.699 "write": true, 00:10:37.699 "unmap": true, 00:10:37.699 "write_zeroes": true, 00:10:37.699 "flush": true, 00:10:37.699 "reset": true, 00:10:37.699 "compare": false, 00:10:37.699 "compare_and_write": false, 00:10:37.699 "abort": true, 00:10:37.699 "nvme_admin": false, 00:10:37.699 "nvme_io": false 00:10:37.699 }, 00:10:37.699 "memory_domains": [ 00:10:37.699 { 00:10:37.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.699 "dma_device_type": 2 00:10:37.699 } 00:10:37.699 ], 00:10:37.699 "driver_specific": {} 00:10:37.699 } 00:10:37.699 ] 00:10:37.699 
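The recurring pattern in the trace above is the harness creating a malloc base bdev over JSON-RPC and then blocking until the bdev layer has examined it before handing it to the raid module. A minimal sketch of that sequence, assuming a bdev_svc app is already listening on /var/tmp/spdk-raid.sock as in this run (the commands are the same ones the trace issues; only the comments are added):

# Create a 32 MiB malloc bdev with a 512-byte block size (65536 blocks) to serve as a raid member
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
# Block until bdev examination has finished so the new bdev is usable
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
# Confirm the bdev came up, waiting at most 2000 ms as waitforbdev does above
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000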
07:56:43 -- common/autotest_common.sh@895 -- # return 0 00:10:37.699 07:56:43 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:37.699 07:56:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:37.699 07:56:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:37.699 07:56:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:37.699 07:56:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:37.699 07:56:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:37.699 07:56:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:37.699 07:56:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:37.699 07:56:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:37.699 07:56:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:37.699 07:56:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:37.699 07:56:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.957 07:56:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:37.957 "name": "Existed_Raid", 00:10:37.957 "uuid": "e379c390-1233-4b38-a70f-407fbec9d77c", 00:10:37.957 "strip_size_kb": 64, 00:10:37.957 "state": "configuring", 00:10:37.957 "raid_level": "raid0", 00:10:37.957 "superblock": true, 00:10:37.957 "num_base_bdevs": 2, 00:10:37.957 "num_base_bdevs_discovered": 1, 00:10:37.957 "num_base_bdevs_operational": 2, 00:10:37.957 "base_bdevs_list": [ 00:10:37.957 { 00:10:37.957 "name": "BaseBdev1", 00:10:37.957 "uuid": "88f53ac7-ebe6-48e5-b294-a682dade96ac", 00:10:37.957 "is_configured": true, 00:10:37.957 "data_offset": 2048, 00:10:37.957 "data_size": 63488 00:10:37.957 }, 00:10:37.957 { 00:10:37.957 "name": "BaseBdev2", 00:10:37.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.957 "is_configured": false, 00:10:37.957 "data_offset": 0, 00:10:37.957 "data_size": 0 00:10:37.957 } 00:10:37.957 ] 00:10:37.957 }' 00:10:37.957 07:56:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:37.957 07:56:43 -- common/autotest_common.sh@10 -- # set +x 00:10:38.520 07:56:44 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:38.777 [2024-07-13 07:56:44.522080] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:38.777 [2024-07-13 07:56:44.522125] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026180 name Existed_Raid, state configuring 00:10:38.777 07:56:44 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:10:38.777 07:56:44 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:39.035 07:56:44 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:39.292 BaseBdev1 00:10:39.292 07:56:44 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:10:39.292 07:56:44 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:10:39.292 07:56:44 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:39.292 07:56:44 -- common/autotest_common.sh@889 -- # local i 00:10:39.292 07:56:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:39.293 07:56:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:39.293 07:56:44 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:39.550 07:56:45 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:39.550 [ 00:10:39.550 { 00:10:39.550 "name": "BaseBdev1", 00:10:39.550 "aliases": [ 00:10:39.550 "6292f0d7-d6f8-4810-8e6b-e561684ecdf3" 00:10:39.550 ], 00:10:39.550 "product_name": "Malloc disk", 00:10:39.550 "block_size": 512, 00:10:39.550 "num_blocks": 65536, 00:10:39.550 "uuid": "6292f0d7-d6f8-4810-8e6b-e561684ecdf3", 00:10:39.550 "assigned_rate_limits": { 00:10:39.550 "rw_ios_per_sec": 0, 00:10:39.550 "rw_mbytes_per_sec": 0, 00:10:39.550 "r_mbytes_per_sec": 0, 00:10:39.550 "w_mbytes_per_sec": 0 00:10:39.550 }, 00:10:39.550 "claimed": false, 00:10:39.550 "zoned": false, 00:10:39.550 "supported_io_types": { 00:10:39.550 "read": true, 00:10:39.550 "write": true, 00:10:39.550 "unmap": true, 00:10:39.551 "write_zeroes": true, 00:10:39.551 "flush": true, 00:10:39.551 "reset": true, 00:10:39.551 "compare": false, 00:10:39.551 "compare_and_write": false, 00:10:39.551 "abort": true, 00:10:39.551 "nvme_admin": false, 00:10:39.551 "nvme_io": false 00:10:39.551 }, 00:10:39.551 "memory_domains": [ 00:10:39.551 { 00:10:39.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.551 "dma_device_type": 2 00:10:39.551 } 00:10:39.551 ], 00:10:39.551 "driver_specific": {} 00:10:39.551 } 00:10:39.551 ] 00:10:39.551 07:56:45 -- common/autotest_common.sh@895 -- # return 0 00:10:39.551 07:56:45 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:39.808 [2024-07-13 07:56:45.438954] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:39.808 [2024-07-13 07:56:45.440524] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:39.808 [2024-07-13 07:56:45.440584] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:39.808 07:56:45 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:10:39.808 07:56:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:39.808 07:56:45 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:39.808 07:56:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:39.808 07:56:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:39.808 07:56:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:39.808 07:56:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:39.808 07:56:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:39.808 07:56:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:39.808 07:56:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:39.809 07:56:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:39.809 07:56:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:39.809 07:56:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.809 07:56:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:40.066 07:56:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:40.066 "name": "Existed_Raid", 00:10:40.066 "uuid": "0b2b6f33-8824-43e6-9c6a-52ef0adc1f49", 00:10:40.066 "strip_size_kb": 64, 00:10:40.066 "state": 
"configuring", 00:10:40.066 "raid_level": "raid0", 00:10:40.066 "superblock": true, 00:10:40.066 "num_base_bdevs": 2, 00:10:40.066 "num_base_bdevs_discovered": 1, 00:10:40.066 "num_base_bdevs_operational": 2, 00:10:40.066 "base_bdevs_list": [ 00:10:40.066 { 00:10:40.066 "name": "BaseBdev1", 00:10:40.066 "uuid": "6292f0d7-d6f8-4810-8e6b-e561684ecdf3", 00:10:40.066 "is_configured": true, 00:10:40.066 "data_offset": 2048, 00:10:40.066 "data_size": 63488 00:10:40.066 }, 00:10:40.066 { 00:10:40.066 "name": "BaseBdev2", 00:10:40.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.066 "is_configured": false, 00:10:40.067 "data_offset": 0, 00:10:40.067 "data_size": 0 00:10:40.067 } 00:10:40.067 ] 00:10:40.067 }' 00:10:40.067 07:56:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:40.067 07:56:45 -- common/autotest_common.sh@10 -- # set +x 00:10:40.633 07:56:46 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:40.633 [2024-07-13 07:56:46.306928] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:40.633 [2024-07-13 07:56:46.307053] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000027680 00:10:40.633 [2024-07-13 07:56:46.307065] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:40.633 [2024-07-13 07:56:46.307135] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:10:40.633 [2024-07-13 07:56:46.307345] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000027680 00:10:40.633 [2024-07-13 07:56:46.307356] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000027680 00:10:40.633 [2024-07-13 07:56:46.307434] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.633 BaseBdev2 00:10:40.633 07:56:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:10:40.633 07:56:46 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:10:40.633 07:56:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:40.633 07:56:46 -- common/autotest_common.sh@889 -- # local i 00:10:40.633 07:56:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:40.633 07:56:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:40.633 07:56:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:40.891 07:56:46 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:41.150 [ 00:10:41.150 { 00:10:41.150 "name": "BaseBdev2", 00:10:41.150 "aliases": [ 00:10:41.150 "fed6a7de-3b9a-4c8f-a1e2-b1480eb1af60" 00:10:41.150 ], 00:10:41.150 "product_name": "Malloc disk", 00:10:41.150 "block_size": 512, 00:10:41.150 "num_blocks": 65536, 00:10:41.150 "uuid": "fed6a7de-3b9a-4c8f-a1e2-b1480eb1af60", 00:10:41.150 "assigned_rate_limits": { 00:10:41.150 "rw_ios_per_sec": 0, 00:10:41.150 "rw_mbytes_per_sec": 0, 00:10:41.150 "r_mbytes_per_sec": 0, 00:10:41.150 "w_mbytes_per_sec": 0 00:10:41.150 }, 00:10:41.150 "claimed": true, 00:10:41.150 "claim_type": "exclusive_write", 00:10:41.150 "zoned": false, 00:10:41.150 "supported_io_types": { 00:10:41.150 "read": true, 00:10:41.150 "write": true, 00:10:41.150 "unmap": true, 00:10:41.150 "write_zeroes": true, 00:10:41.150 "flush": true, 00:10:41.150 
"reset": true, 00:10:41.150 "compare": false, 00:10:41.150 "compare_and_write": false, 00:10:41.150 "abort": true, 00:10:41.150 "nvme_admin": false, 00:10:41.150 "nvme_io": false 00:10:41.150 }, 00:10:41.150 "memory_domains": [ 00:10:41.150 { 00:10:41.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.150 "dma_device_type": 2 00:10:41.150 } 00:10:41.150 ], 00:10:41.150 "driver_specific": {} 00:10:41.150 } 00:10:41.150 ] 00:10:41.150 07:56:46 -- common/autotest_common.sh@895 -- # return 0 00:10:41.150 07:56:46 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:10:41.150 07:56:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:41.150 07:56:46 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:10:41.150 07:56:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:41.150 07:56:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:41.150 07:56:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:41.151 07:56:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:41.151 07:56:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:41.151 07:56:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:41.151 07:56:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:41.151 07:56:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:41.151 07:56:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:41.151 07:56:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:41.151 07:56:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.151 07:56:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:41.151 "name": "Existed_Raid", 00:10:41.151 "uuid": "0b2b6f33-8824-43e6-9c6a-52ef0adc1f49", 00:10:41.151 "strip_size_kb": 64, 00:10:41.151 "state": "online", 00:10:41.151 "raid_level": "raid0", 00:10:41.151 "superblock": true, 00:10:41.151 "num_base_bdevs": 2, 00:10:41.151 "num_base_bdevs_discovered": 2, 00:10:41.151 "num_base_bdevs_operational": 2, 00:10:41.151 "base_bdevs_list": [ 00:10:41.151 { 00:10:41.151 "name": "BaseBdev1", 00:10:41.151 "uuid": "6292f0d7-d6f8-4810-8e6b-e561684ecdf3", 00:10:41.151 "is_configured": true, 00:10:41.151 "data_offset": 2048, 00:10:41.151 "data_size": 63488 00:10:41.151 }, 00:10:41.151 { 00:10:41.151 "name": "BaseBdev2", 00:10:41.151 "uuid": "fed6a7de-3b9a-4c8f-a1e2-b1480eb1af60", 00:10:41.151 "is_configured": true, 00:10:41.151 "data_offset": 2048, 00:10:41.151 "data_size": 63488 00:10:41.151 } 00:10:41.151 ] 00:10:41.151 }' 00:10:41.151 07:56:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:41.151 07:56:46 -- common/autotest_common.sh@10 -- # set +x 00:10:41.749 07:56:47 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:42.007 [2024-07-13 07:56:47.695261] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:42.007 [2024-07-13 07:56:47.695295] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:42.007 [2024-07-13 07:56:47.695356] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:42.007 07:56:47 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:10:42.007 07:56:47 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:10:42.007 07:56:47 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:10:42.007 07:56:47 -- bdev/bdev_raid.sh@197 -- # return 1 00:10:42.007 
07:56:47 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:10:42.007 07:56:47 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:10:42.007 07:56:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:42.007 07:56:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:10:42.007 07:56:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:42.007 07:56:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:42.007 07:56:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:10:42.007 07:56:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:42.007 07:56:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:42.007 07:56:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:42.007 07:56:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:42.007 07:56:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:42.007 07:56:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.266 07:56:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:42.266 "name": "Existed_Raid", 00:10:42.266 "uuid": "0b2b6f33-8824-43e6-9c6a-52ef0adc1f49", 00:10:42.266 "strip_size_kb": 64, 00:10:42.266 "state": "offline", 00:10:42.266 "raid_level": "raid0", 00:10:42.266 "superblock": true, 00:10:42.266 "num_base_bdevs": 2, 00:10:42.266 "num_base_bdevs_discovered": 1, 00:10:42.266 "num_base_bdevs_operational": 1, 00:10:42.266 "base_bdevs_list": [ 00:10:42.266 { 00:10:42.266 "name": null, 00:10:42.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.266 "is_configured": false, 00:10:42.266 "data_offset": 2048, 00:10:42.266 "data_size": 63488 00:10:42.266 }, 00:10:42.266 { 00:10:42.266 "name": "BaseBdev2", 00:10:42.266 "uuid": "fed6a7de-3b9a-4c8f-a1e2-b1480eb1af60", 00:10:42.266 "is_configured": true, 00:10:42.266 "data_offset": 2048, 00:10:42.266 "data_size": 63488 00:10:42.266 } 00:10:42.266 ] 00:10:42.266 }' 00:10:42.266 07:56:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:42.266 07:56:47 -- common/autotest_common.sh@10 -- # set +x 00:10:42.833 07:56:48 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:10:42.833 07:56:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:42.833 07:56:48 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:42.833 07:56:48 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:10:43.091 07:56:48 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:10:43.091 07:56:48 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:43.091 07:56:48 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:43.349 [2024-07-13 07:56:48.905039] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:43.349 [2024-07-13 07:56:48.905108] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027680 name Existed_Raid, state offline 00:10:43.349 07:56:48 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:10:43.349 07:56:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:43.349 07:56:48 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:43.349 07:56:48 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:10:43.349 07:56:49 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:10:43.349 07:56:49 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:10:43.349 07:56:49 -- bdev/bdev_raid.sh@287 -- # killprocess 57828 00:10:43.349 07:56:49 -- common/autotest_common.sh@926 -- # '[' -z 57828 ']' 00:10:43.349 07:56:49 -- common/autotest_common.sh@930 -- # kill -0 57828 00:10:43.349 07:56:49 -- common/autotest_common.sh@931 -- # uname 00:10:43.349 07:56:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:43.349 07:56:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57828 00:10:43.349 killing process with pid 57828 00:10:43.349 07:56:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:43.349 07:56:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:43.349 07:56:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57828' 00:10:43.349 07:56:49 -- common/autotest_common.sh@945 -- # kill 57828 00:10:43.349 07:56:49 -- common/autotest_common.sh@950 -- # wait 57828 00:10:43.349 [2024-07-13 07:56:49.119696] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:43.349 [2024-07-13 07:56:49.119776] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:43.608 ************************************ 00:10:43.608 END TEST raid_state_function_test_sb 00:10:43.608 ************************************ 00:10:43.608 07:56:49 -- bdev/bdev_raid.sh@289 -- # return 0 00:10:43.608 00:10:43.608 real 0m8.809s 00:10:43.608 user 0m16.029s 00:10:43.608 sys 0m1.132s 00:10:43.608 07:56:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:43.608 07:56:49 -- common/autotest_common.sh@10 -- # set +x 00:10:43.608 07:56:49 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:10:43.608 07:56:49 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:10:43.608 07:56:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:43.608 07:56:49 -- common/autotest_common.sh@10 -- # set +x 00:10:43.608 ************************************ 00:10:43.608 START TEST raid_superblock_test 00:10:43.608 ************************************ 00:10:43.608 07:56:49 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 2 00:10:43.608 07:56:49 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:10:43.608 07:56:49 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:10:43.608 07:56:49 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:10:43.608 07:56:49 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:10:43.608 07:56:49 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:10:43.608 07:56:49 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:10:43.608 07:56:49 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:10:43.608 07:56:49 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:10:43.608 07:56:49 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:10:43.608 07:56:49 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:10:43.608 07:56:49 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:10:43.608 07:56:49 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:10:43.608 07:56:49 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:10:43.608 07:56:49 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:10:43.608 07:56:49 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:10:43.608 07:56:49 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:10:43.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
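State verification throughout these tests is a pure read-side operation: the harness dumps every raid bdev and filters the JSON with jq. A sketch of the two query shapes used above, assuming the same RPC socket (both jq filters are copied from the trace):

# Pull the full record for the bdev under test; .state should read configuring, online, or offline
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
# After the last base bdev is deleted the list is empty, so this prints nothing
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)'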
00:10:43.608 07:56:49 -- bdev/bdev_raid.sh@357 -- # raid_pid=58143 00:10:43.608 07:56:49 -- bdev/bdev_raid.sh@358 -- # waitforlisten 58143 /var/tmp/spdk-raid.sock 00:10:43.608 07:56:49 -- common/autotest_common.sh@819 -- # '[' -z 58143 ']' 00:10:43.608 07:56:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:43.608 07:56:49 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:10:43.608 07:56:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:43.608 07:56:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:43.608 07:56:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:43.608 07:56:49 -- common/autotest_common.sh@10 -- # set +x 00:10:43.867 [2024-07-13 07:56:49.494704] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:43.867 [2024-07-13 07:56:49.494960] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58143 ] 00:10:43.867 [2024-07-13 07:56:49.639308] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.125 [2024-07-13 07:56:49.698659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.125 [2024-07-13 07:56:49.748026] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:44.691 07:56:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:44.691 07:56:50 -- common/autotest_common.sh@852 -- # return 0 00:10:44.691 07:56:50 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:10:44.691 07:56:50 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:10:44.691 07:56:50 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:10:44.691 07:56:50 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:10:44.691 07:56:50 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:44.691 07:56:50 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:44.691 07:56:50 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:10:44.691 07:56:50 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:44.691 07:56:50 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:10:44.691 malloc1 00:10:44.691 07:56:50 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:44.950 [2024-07-13 07:56:50.669655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:44.950 [2024-07-13 07:56:50.669738] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.950 [2024-07-13 07:56:50.669808] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000026180 00:10:44.950 [2024-07-13 07:56:50.669847] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.950 pt1 00:10:44.950 [2024-07-13 07:56:50.671675] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.950 [2024-07-13 07:56:50.671718] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:44.950 07:56:50 -- 
bdev/bdev_raid.sh@361 -- # (( i++ )) 00:10:44.950 07:56:50 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:10:44.950 07:56:50 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:10:44.950 07:56:50 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:10:44.950 07:56:50 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:44.950 07:56:50 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:44.950 07:56:50 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:10:44.950 07:56:50 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:44.950 07:56:50 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:10:45.209 malloc2 00:10:45.209 07:56:50 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:45.468 [2024-07-13 07:56:51.055023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:45.468 [2024-07-13 07:56:51.055099] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.468 [2024-07-13 07:56:51.055146] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027f80 00:10:45.468 [2024-07-13 07:56:51.055191] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.468 [2024-07-13 07:56:51.056792] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.468 [2024-07-13 07:56:51.056840] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:45.468 pt2 00:10:45.468 07:56:51 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:10:45.468 07:56:51 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:10:45.468 07:56:51 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:10:45.468 [2024-07-13 07:56:51.219082] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:45.468 [2024-07-13 07:56:51.220536] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:45.468 [2024-07-13 07:56:51.220642] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000029480 00:10:45.468 [2024-07-13 07:56:51.220655] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:45.468 [2024-07-13 07:56:51.220763] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:10:45.468 [2024-07-13 07:56:51.220972] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000029480 00:10:45.468 [2024-07-13 07:56:51.220981] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000029480 00:10:45.468 [2024-07-13 07:56:51.221046] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.468 07:56:51 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:45.468 07:56:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:45.468 07:56:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:45.468 07:56:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:45.468 07:56:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:45.468 07:56:51 -- bdev/bdev_raid.sh@121 -- # 
local num_base_bdevs_operational=2 00:10:45.468 07:56:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:45.468 07:56:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:45.468 07:56:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:45.468 07:56:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:45.468 07:56:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.468 07:56:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:45.726 07:56:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:45.726 "name": "raid_bdev1", 00:10:45.726 "uuid": "7c66e256-297b-470b-9e77-6b57915efa7d", 00:10:45.726 "strip_size_kb": 64, 00:10:45.726 "state": "online", 00:10:45.726 "raid_level": "raid0", 00:10:45.726 "superblock": true, 00:10:45.726 "num_base_bdevs": 2, 00:10:45.726 "num_base_bdevs_discovered": 2, 00:10:45.726 "num_base_bdevs_operational": 2, 00:10:45.726 "base_bdevs_list": [ 00:10:45.726 { 00:10:45.726 "name": "pt1", 00:10:45.726 "uuid": "5c424b3b-5b9b-59d5-b567-778ce1896116", 00:10:45.726 "is_configured": true, 00:10:45.726 "data_offset": 2048, 00:10:45.726 "data_size": 63488 00:10:45.726 }, 00:10:45.726 { 00:10:45.726 "name": "pt2", 00:10:45.726 "uuid": "981ae6ce-309f-53bc-ba44-707ebd420c19", 00:10:45.726 "is_configured": true, 00:10:45.726 "data_offset": 2048, 00:10:45.726 "data_size": 63488 00:10:45.726 } 00:10:45.726 ] 00:10:45.726 }' 00:10:45.726 07:56:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:45.726 07:56:51 -- common/autotest_common.sh@10 -- # set +x 00:10:46.659 07:56:52 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:46.659 07:56:52 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:10:46.659 [2024-07-13 07:56:52.387338] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:46.659 07:56:52 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=7c66e256-297b-470b-9e77-6b57915efa7d 00:10:46.659 07:56:52 -- bdev/bdev_raid.sh@380 -- # '[' -z 7c66e256-297b-470b-9e77-6b57915efa7d ']' 00:10:46.659 07:56:52 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:46.917 [2024-07-13 07:56:52.551248] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:46.917 [2024-07-13 07:56:52.551278] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:46.917 [2024-07-13 07:56:52.551346] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:46.917 [2024-07-13 07:56:52.551381] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:46.917 [2024-07-13 07:56:52.551392] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000029480 name raid_bdev1, state offline 00:10:46.917 07:56:52 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:46.917 07:56:52 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:10:47.176 07:56:52 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:10:47.176 07:56:52 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:10:47.176 07:56:52 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:10:47.176 07:56:52 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:10:47.432 07:56:52 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:10:47.432 07:56:52 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:47.432 07:56:53 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:47.432 07:56:53 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:10:47.688 07:56:53 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:10:47.688 07:56:53 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:10:47.688 07:56:53 -- common/autotest_common.sh@640 -- # local es=0 00:10:47.688 07:56:53 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:10:47.688 07:56:53 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:47.688 07:56:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:47.688 07:56:53 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:47.688 07:56:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:47.688 07:56:53 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:47.688 07:56:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:47.688 07:56:53 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:47.688 07:56:53 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:47.688 07:56:53 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:10:47.945 [2024-07-13 07:56:53.523364] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:47.945 [2024-07-13 07:56:53.524882] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:47.945 [2024-07-13 07:56:53.524926] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:10:47.945 [2024-07-13 07:56:53.524976] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:10:47.945 [2024-07-13 07:56:53.525008] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:47.945 [2024-07-13 07:56:53.525018] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000029a80 name raid_bdev1, state configuring 00:10:47.945 request: 00:10:47.945 { 00:10:47.945 "name": "raid_bdev1", 00:10:47.945 "raid_level": "raid0", 00:10:47.945 "base_bdevs": [ 00:10:47.945 "malloc1", 00:10:47.945 "malloc2" 00:10:47.945 ], 00:10:47.945 "superblock": false, 00:10:47.945 "strip_size_kb": 64, 00:10:47.945 "method": "bdev_raid_create", 00:10:47.945 "req_id": 1 00:10:47.945 } 00:10:47.945 Got JSON-RPC error response 00:10:47.945 response: 00:10:47.945 { 00:10:47.945 "code": -17, 00:10:47.945 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:47.945 } 00:10:47.945 07:56:53 -- common/autotest_common.sh@643 -- # es=1 
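The negative-path check above asserts that re-creating raid_bdev1 must fail: malloc1 and malloc2 still carry the superblock written for the first raid_bdev1, so the module rejects the create with JSON-RPC error -17 (File exists), as the request/response dump shows. Reproduced outside the NOT helper, the same assertion would look roughly like this (the rpc.py invocation is copied from the trace; the if/else scaffolding is illustrative):

# Both malloc bdevs already hold raid_bdev1's superblock, so this create is expected to fail
if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1; then
    echo 'unexpected success: stale superblocks should block re-creation' >&2
    exit 1
fi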
00:10:47.945 07:56:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:47.945 07:56:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:47.945 07:56:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:47.945 07:56:53 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:10:47.945 07:56:53 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:48.202 07:56:53 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:10:48.202 07:56:53 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:10:48.202 07:56:53 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:48.202 [2024-07-13 07:56:53.915366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:48.202 [2024-07-13 07:56:53.915631] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.202 [2024-07-13 07:56:53.915682] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002a980 00:10:48.202 [2024-07-13 07:56:53.915710] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.202 pt1 00:10:48.202 [2024-07-13 07:56:53.917260] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.202 [2024-07-13 07:56:53.917298] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:48.202 [2024-07-13 07:56:53.917348] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:10:48.202 [2024-07-13 07:56:53.917397] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:48.202 07:56:53 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:10:48.202 07:56:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:48.202 07:56:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:48.202 07:56:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:48.202 07:56:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:48.202 07:56:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:48.202 07:56:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:48.202 07:56:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:48.202 07:56:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:48.202 07:56:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:48.202 07:56:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:48.202 07:56:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.458 07:56:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:48.458 "name": "raid_bdev1", 00:10:48.458 "uuid": "7c66e256-297b-470b-9e77-6b57915efa7d", 00:10:48.458 "strip_size_kb": 64, 00:10:48.458 "state": "configuring", 00:10:48.458 "raid_level": "raid0", 00:10:48.458 "superblock": true, 00:10:48.458 "num_base_bdevs": 2, 00:10:48.458 "num_base_bdevs_discovered": 1, 00:10:48.458 "num_base_bdevs_operational": 2, 00:10:48.458 "base_bdevs_list": [ 00:10:48.458 { 00:10:48.458 "name": "pt1", 00:10:48.458 "uuid": "5c424b3b-5b9b-59d5-b567-778ce1896116", 00:10:48.458 "is_configured": true, 00:10:48.459 "data_offset": 2048, 00:10:48.459 "data_size": 63488 00:10:48.459 }, 00:10:48.459 { 00:10:48.459 "name": null, 00:10:48.459 "uuid": 
"981ae6ce-309f-53bc-ba44-707ebd420c19", 00:10:48.459 "is_configured": false, 00:10:48.459 "data_offset": 2048, 00:10:48.459 "data_size": 63488 00:10:48.459 } 00:10:48.459 ] 00:10:48.459 }' 00:10:48.459 07:56:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:48.459 07:56:54 -- common/autotest_common.sh@10 -- # set +x 00:10:49.024 07:56:54 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:10:49.024 07:56:54 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:10:49.024 07:56:54 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:10:49.024 07:56:54 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:49.024 [2024-07-13 07:56:54.835539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:49.024 [2024-07-13 07:56:54.835647] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.024 [2024-07-13 07:56:54.835691] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002c480 00:10:49.024 [2024-07-13 07:56:54.835716] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.024 [2024-07-13 07:56:54.835990] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.024 [2024-07-13 07:56:54.836026] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:49.024 [2024-07-13 07:56:54.836079] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:10:49.024 [2024-07-13 07:56:54.836105] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:49.024 [2024-07-13 07:56:54.836161] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002be80 00:10:49.024 [2024-07-13 07:56:54.836171] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:49.024 [2024-07-13 07:56:54.836222] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:10:49.024 [2024-07-13 07:56:54.836403] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002be80 00:10:49.024 [2024-07-13 07:56:54.836414] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002be80 00:10:49.282 [2024-07-13 07:56:54.836467] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:49.282 pt2 00:10:49.282 07:56:54 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:10:49.282 07:56:54 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:10:49.282 07:56:54 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:49.282 07:56:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:49.282 07:56:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:49.282 07:56:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:49.282 07:56:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:49.282 07:56:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:49.282 07:56:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:49.282 07:56:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:49.282 07:56:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:49.282 07:56:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:49.282 07:56:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.282 07:56:54 
-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:49.282 07:56:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:49.282 "name": "raid_bdev1", 00:10:49.282 "uuid": "7c66e256-297b-470b-9e77-6b57915efa7d", 00:10:49.282 "strip_size_kb": 64, 00:10:49.282 "state": "online", 00:10:49.282 "raid_level": "raid0", 00:10:49.282 "superblock": true, 00:10:49.282 "num_base_bdevs": 2, 00:10:49.282 "num_base_bdevs_discovered": 2, 00:10:49.282 "num_base_bdevs_operational": 2, 00:10:49.282 "base_bdevs_list": [ 00:10:49.282 { 00:10:49.282 "name": "pt1", 00:10:49.282 "uuid": "5c424b3b-5b9b-59d5-b567-778ce1896116", 00:10:49.282 "is_configured": true, 00:10:49.282 "data_offset": 2048, 00:10:49.282 "data_size": 63488 00:10:49.282 }, 00:10:49.282 { 00:10:49.282 "name": "pt2", 00:10:49.282 "uuid": "981ae6ce-309f-53bc-ba44-707ebd420c19", 00:10:49.282 "is_configured": true, 00:10:49.282 "data_offset": 2048, 00:10:49.282 "data_size": 63488 00:10:49.282 } 00:10:49.282 ] 00:10:49.282 }' 00:10:49.282 07:56:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:49.282 07:56:55 -- common/autotest_common.sh@10 -- # set +x 00:10:49.848 07:56:55 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:49.848 07:56:55 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:10:50.106 [2024-07-13 07:56:55.711757] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:50.106 07:56:55 -- bdev/bdev_raid.sh@430 -- # '[' 7c66e256-297b-470b-9e77-6b57915efa7d '!=' 7c66e256-297b-470b-9e77-6b57915efa7d ']' 00:10:50.106 07:56:55 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:10:50.106 07:56:55 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:10:50.106 07:56:55 -- bdev/bdev_raid.sh@197 -- # return 1 00:10:50.106 07:56:55 -- bdev/bdev_raid.sh@511 -- # killprocess 58143 00:10:50.106 07:56:55 -- common/autotest_common.sh@926 -- # '[' -z 58143 ']' 00:10:50.106 07:56:55 -- common/autotest_common.sh@930 -- # kill -0 58143 00:10:50.106 07:56:55 -- common/autotest_common.sh@931 -- # uname 00:10:50.106 07:56:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:50.106 07:56:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58143 00:10:50.106 killing process with pid 58143 00:10:50.106 07:56:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:50.106 07:56:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:50.106 07:56:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58143' 00:10:50.106 07:56:55 -- common/autotest_common.sh@945 -- # kill 58143 00:10:50.106 07:56:55 -- common/autotest_common.sh@950 -- # wait 58143 00:10:50.106 [2024-07-13 07:56:55.757751] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:50.106 [2024-07-13 07:56:55.757828] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:50.106 [2024-07-13 07:56:55.757859] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:50.106 [2024-07-13 07:56:55.757868] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002be80 name raid_bdev1, state offline 00:10:50.106 [2024-07-13 07:56:55.778392] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:50.363 ************************************ 00:10:50.363 END TEST raid_superblock_test 00:10:50.363 
************************************ 00:10:50.363 07:56:55 -- bdev/bdev_raid.sh@513 -- # return 0 00:10:50.363 00:10:50.363 real 0m6.614s 00:10:50.363 user 0m11.852s 00:10:50.363 sys 0m0.917s 00:10:50.363 07:56:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:50.363 07:56:55 -- common/autotest_common.sh@10 -- # set +x 00:10:50.363 07:56:56 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:10:50.363 07:56:56 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:10:50.363 07:56:56 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:10:50.363 07:56:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:50.363 07:56:56 -- common/autotest_common.sh@10 -- # set +x 00:10:50.363 ************************************ 00:10:50.363 START TEST raid_state_function_test 00:10:50.363 ************************************ 00:10:50.363 07:56:56 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 false 00:10:50.363 07:56:56 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:10:50.363 07:56:56 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:10:50.363 07:56:56 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:10:50.363 07:56:56 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:10:50.363 07:56:56 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:10:50.363 07:56:56 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:10:50.364 07:56:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:50.364 07:56:56 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:10:50.364 07:56:56 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:50.364 07:56:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:50.364 07:56:56 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:10:50.364 07:56:56 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:50.364 07:56:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:50.364 07:56:56 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:10:50.364 07:56:56 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:10:50.364 07:56:56 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:10:50.364 07:56:56 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:10:50.364 07:56:56 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:10:50.364 07:56:56 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:10:50.364 07:56:56 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:10:50.364 07:56:56 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:10:50.364 07:56:56 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:10:50.364 07:56:56 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:10:50.364 Process raid pid: 58367 00:10:50.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
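The concat run drives the same state machine as the earlier raid0 passes; the script still derives a strip size because it treats every level other than raid1 as striped, so the create call differs only in the -r argument. A sketch of the create this test issues once both base bdevs exist (arguments as they appear later in the trace):

# Two-member concat array; -z 64 requests a 64 KiB strip size
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid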
00:10:50.364 07:56:56 -- bdev/bdev_raid.sh@226 -- # raid_pid=58367 00:10:50.364 07:56:56 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 58367' 00:10:50.364 07:56:56 -- bdev/bdev_raid.sh@228 -- # waitforlisten 58367 /var/tmp/spdk-raid.sock 00:10:50.364 07:56:56 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:50.364 07:56:56 -- common/autotest_common.sh@819 -- # '[' -z 58367 ']' 00:10:50.364 07:56:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:50.364 07:56:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:50.364 07:56:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:50.364 07:56:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:50.364 07:56:56 -- common/autotest_common.sh@10 -- # set +x 00:10:50.364 [2024-07-13 07:56:56.168799] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:50.364 [2024-07-13 07:56:56.169026] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.622 [2024-07-13 07:56:56.312413] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.622 [2024-07-13 07:56:56.360522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.622 [2024-07-13 07:56:56.406677] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.188 07:56:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:51.188 07:56:56 -- common/autotest_common.sh@852 -- # return 0 00:10:51.188 07:56:56 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:51.446 [2024-07-13 07:56:57.141743] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:51.446 [2024-07-13 07:56:57.141811] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:51.446 [2024-07-13 07:56:57.141823] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:51.446 [2024-07-13 07:56:57.141872] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:51.446 07:56:57 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:51.446 07:56:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:51.446 07:56:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:51.446 07:56:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:10:51.446 07:56:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:51.446 07:56:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:51.446 07:56:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:51.446 07:56:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:51.446 07:56:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:51.446 07:56:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:51.446 07:56:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.446 07:56:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:51.704 07:56:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:51.704 "name": "Existed_Raid", 00:10:51.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.704 "strip_size_kb": 64, 00:10:51.704 "state": "configuring", 00:10:51.704 "raid_level": "concat", 00:10:51.704 "superblock": false, 00:10:51.704 "num_base_bdevs": 2, 00:10:51.704 "num_base_bdevs_discovered": 0, 00:10:51.704 "num_base_bdevs_operational": 2, 00:10:51.704 "base_bdevs_list": [ 00:10:51.704 { 00:10:51.704 "name": "BaseBdev1", 00:10:51.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.704 "is_configured": false, 00:10:51.704 "data_offset": 0, 00:10:51.704 "data_size": 0 00:10:51.704 }, 00:10:51.704 { 00:10:51.704 "name": "BaseBdev2", 00:10:51.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.704 "is_configured": false, 00:10:51.704 "data_offset": 0, 00:10:51.704 "data_size": 0 00:10:51.704 } 00:10:51.704 ] 00:10:51.704 }' 00:10:51.704 07:56:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:51.704 07:56:57 -- common/autotest_common.sh@10 -- # set +x 00:10:52.269 07:56:57 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:52.269 [2024-07-13 07:56:58.033853] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:52.269 [2024-07-13 07:56:58.033890] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000025880 name Existed_Raid, state configuring 00:10:52.269 07:56:58 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:52.526 [2024-07-13 07:56:58.197930] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:52.526 [2024-07-13 07:56:58.198007] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:52.526 [2024-07-13 07:56:58.198018] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:52.526 [2024-07-13 07:56:58.198040] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:52.526 07:56:58 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:52.783 BaseBdev1 00:10:52.783 [2024-07-13 07:56:58.364921] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:52.783 07:56:58 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:10:52.783 07:56:58 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:10:52.783 07:56:58 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:52.783 07:56:58 -- common/autotest_common.sh@889 -- # local i 00:10:52.783 07:56:58 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:52.783 07:56:58 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:52.783 07:56:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:52.783 07:56:58 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:53.041 [ 00:10:53.041 { 00:10:53.041 "name": "BaseBdev1", 00:10:53.041 "aliases": [ 00:10:53.041 "f4cbc9ca-20b3-4792-b609-849b8abaa94e" 00:10:53.041 ], 00:10:53.041 "product_name": 
"Malloc disk", 00:10:53.041 "block_size": 512, 00:10:53.041 "num_blocks": 65536, 00:10:53.041 "uuid": "f4cbc9ca-20b3-4792-b609-849b8abaa94e", 00:10:53.041 "assigned_rate_limits": { 00:10:53.041 "rw_ios_per_sec": 0, 00:10:53.041 "rw_mbytes_per_sec": 0, 00:10:53.041 "r_mbytes_per_sec": 0, 00:10:53.041 "w_mbytes_per_sec": 0 00:10:53.041 }, 00:10:53.041 "claimed": true, 00:10:53.041 "claim_type": "exclusive_write", 00:10:53.041 "zoned": false, 00:10:53.041 "supported_io_types": { 00:10:53.041 "read": true, 00:10:53.041 "write": true, 00:10:53.041 "unmap": true, 00:10:53.041 "write_zeroes": true, 00:10:53.041 "flush": true, 00:10:53.041 "reset": true, 00:10:53.041 "compare": false, 00:10:53.041 "compare_and_write": false, 00:10:53.041 "abort": true, 00:10:53.041 "nvme_admin": false, 00:10:53.041 "nvme_io": false 00:10:53.041 }, 00:10:53.041 "memory_domains": [ 00:10:53.041 { 00:10:53.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.041 "dma_device_type": 2 00:10:53.041 } 00:10:53.041 ], 00:10:53.041 "driver_specific": {} 00:10:53.041 } 00:10:53.041 ] 00:10:53.041 07:56:58 -- common/autotest_common.sh@895 -- # return 0 00:10:53.041 07:56:58 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:53.041 07:56:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:53.041 07:56:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:53.041 07:56:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:10:53.041 07:56:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:53.041 07:56:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:53.041 07:56:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:53.041 07:56:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:53.041 07:56:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:53.041 07:56:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:53.041 07:56:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.041 07:56:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:53.299 07:56:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:53.299 "name": "Existed_Raid", 00:10:53.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.299 "strip_size_kb": 64, 00:10:53.299 "state": "configuring", 00:10:53.299 "raid_level": "concat", 00:10:53.299 "superblock": false, 00:10:53.299 "num_base_bdevs": 2, 00:10:53.299 "num_base_bdevs_discovered": 1, 00:10:53.299 "num_base_bdevs_operational": 2, 00:10:53.299 "base_bdevs_list": [ 00:10:53.299 { 00:10:53.299 "name": "BaseBdev1", 00:10:53.299 "uuid": "f4cbc9ca-20b3-4792-b609-849b8abaa94e", 00:10:53.299 "is_configured": true, 00:10:53.299 "data_offset": 0, 00:10:53.299 "data_size": 65536 00:10:53.299 }, 00:10:53.299 { 00:10:53.299 "name": "BaseBdev2", 00:10:53.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.299 "is_configured": false, 00:10:53.299 "data_offset": 0, 00:10:53.299 "data_size": 0 00:10:53.299 } 00:10:53.299 ] 00:10:53.299 }' 00:10:53.299 07:56:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:53.299 07:56:58 -- common/autotest_common.sh@10 -- # set +x 00:10:53.867 07:56:59 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:54.124 [2024-07-13 07:56:59.697106] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:54.124 
[2024-07-13 07:56:59.697148] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026180 name Existed_Raid, state configuring 00:10:54.124 07:56:59 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:10:54.124 07:56:59 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:54.124 [2024-07-13 07:56:59.849174] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:54.124 [2024-07-13 07:56:59.850676] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:54.124 [2024-07-13 07:56:59.850729] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:54.124 07:56:59 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:10:54.124 07:56:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:54.124 07:56:59 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:54.124 07:56:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:54.124 07:56:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:54.124 07:56:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:10:54.124 07:56:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:54.124 07:56:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:54.124 07:56:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:54.124 07:56:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:54.124 07:56:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:54.124 07:56:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:54.124 07:56:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.124 07:56:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:54.380 07:57:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:54.380 "name": "Existed_Raid", 00:10:54.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.380 "strip_size_kb": 64, 00:10:54.380 "state": "configuring", 00:10:54.380 "raid_level": "concat", 00:10:54.380 "superblock": false, 00:10:54.380 "num_base_bdevs": 2, 00:10:54.380 "num_base_bdevs_discovered": 1, 00:10:54.380 "num_base_bdevs_operational": 2, 00:10:54.380 "base_bdevs_list": [ 00:10:54.380 { 00:10:54.380 "name": "BaseBdev1", 00:10:54.380 "uuid": "f4cbc9ca-20b3-4792-b609-849b8abaa94e", 00:10:54.380 "is_configured": true, 00:10:54.380 "data_offset": 0, 00:10:54.380 "data_size": 65536 00:10:54.380 }, 00:10:54.380 { 00:10:54.380 "name": "BaseBdev2", 00:10:54.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.380 "is_configured": false, 00:10:54.380 "data_offset": 0, 00:10:54.381 "data_size": 0 00:10:54.381 } 00:10:54.381 ] 00:10:54.381 }' 00:10:54.381 07:57:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:54.381 07:57:00 -- common/autotest_common.sh@10 -- # set +x 00:10:54.946 07:57:00 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:55.205 [2024-07-13 07:57:00.845258] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:55.205 [2024-07-13 07:57:00.845299] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000027080 00:10:55.205 [2024-07-13 07:57:00.845308] 
bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:55.205 [2024-07-13 07:57:00.845401] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001eb0 00:10:55.205 BaseBdev2 00:10:55.205 [2024-07-13 07:57:00.845860] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000027080 00:10:55.205 [2024-07-13 07:57:00.845882] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000027080 00:10:55.205 [2024-07-13 07:57:00.846029] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.205 07:57:00 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:10:55.205 07:57:00 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:10:55.205 07:57:00 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:55.205 07:57:00 -- common/autotest_common.sh@889 -- # local i 00:10:55.205 07:57:00 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:55.205 07:57:00 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:55.205 07:57:00 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:55.464 07:57:01 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:55.464 [ 00:10:55.464 { 00:10:55.464 "name": "BaseBdev2", 00:10:55.464 "aliases": [ 00:10:55.464 "1ef86b15-5f78-4fe1-b895-e77d00ae8078" 00:10:55.464 ], 00:10:55.464 "product_name": "Malloc disk", 00:10:55.464 "block_size": 512, 00:10:55.464 "num_blocks": 65536, 00:10:55.464 "uuid": "1ef86b15-5f78-4fe1-b895-e77d00ae8078", 00:10:55.464 "assigned_rate_limits": { 00:10:55.464 "rw_ios_per_sec": 0, 00:10:55.464 "rw_mbytes_per_sec": 0, 00:10:55.464 "r_mbytes_per_sec": 0, 00:10:55.464 "w_mbytes_per_sec": 0 00:10:55.464 }, 00:10:55.464 "claimed": true, 00:10:55.464 "claim_type": "exclusive_write", 00:10:55.464 "zoned": false, 00:10:55.464 "supported_io_types": { 00:10:55.464 "read": true, 00:10:55.464 "write": true, 00:10:55.464 "unmap": true, 00:10:55.464 "write_zeroes": true, 00:10:55.464 "flush": true, 00:10:55.464 "reset": true, 00:10:55.464 "compare": false, 00:10:55.464 "compare_and_write": false, 00:10:55.464 "abort": true, 00:10:55.464 "nvme_admin": false, 00:10:55.464 "nvme_io": false 00:10:55.464 }, 00:10:55.464 "memory_domains": [ 00:10:55.464 { 00:10:55.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.464 "dma_device_type": 2 00:10:55.464 } 00:10:55.464 ], 00:10:55.464 "driver_specific": {} 00:10:55.464 } 00:10:55.464 ] 00:10:55.464 07:57:01 -- common/autotest_common.sh@895 -- # return 0 00:10:55.464 07:57:01 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:10:55.464 07:57:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:55.464 07:57:01 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:10:55.464 07:57:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:55.464 07:57:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:55.464 07:57:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:10:55.464 07:57:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:55.464 07:57:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:55.464 07:57:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:55.464 07:57:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:55.464 07:57:01 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:55.464 07:57:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:55.464 07:57:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.464 07:57:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:55.723 07:57:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:55.723 "name": "Existed_Raid", 00:10:55.723 "uuid": "739f79c9-e07a-4602-aa3e-92fecc12afc7", 00:10:55.723 "strip_size_kb": 64, 00:10:55.723 "state": "online", 00:10:55.723 "raid_level": "concat", 00:10:55.723 "superblock": false, 00:10:55.723 "num_base_bdevs": 2, 00:10:55.723 "num_base_bdevs_discovered": 2, 00:10:55.723 "num_base_bdevs_operational": 2, 00:10:55.723 "base_bdevs_list": [ 00:10:55.723 { 00:10:55.723 "name": "BaseBdev1", 00:10:55.723 "uuid": "f4cbc9ca-20b3-4792-b609-849b8abaa94e", 00:10:55.723 "is_configured": true, 00:10:55.723 "data_offset": 0, 00:10:55.723 "data_size": 65536 00:10:55.723 }, 00:10:55.723 { 00:10:55.723 "name": "BaseBdev2", 00:10:55.723 "uuid": "1ef86b15-5f78-4fe1-b895-e77d00ae8078", 00:10:55.723 "is_configured": true, 00:10:55.723 "data_offset": 0, 00:10:55.723 "data_size": 65536 00:10:55.723 } 00:10:55.723 ] 00:10:55.723 }' 00:10:55.723 07:57:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:55.723 07:57:01 -- common/autotest_common.sh@10 -- # set +x 00:10:56.289 07:57:02 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:56.547 [2024-07-13 07:57:02.257573] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:56.547 [2024-07-13 07:57:02.257601] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:56.547 [2024-07-13 07:57:02.257651] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:56.547 07:57:02 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:10:56.547 07:57:02 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:10:56.547 07:57:02 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:10:56.547 07:57:02 -- bdev/bdev_raid.sh@197 -- # return 1 00:10:56.547 07:57:02 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:10:56.547 07:57:02 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:10:56.547 07:57:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:56.547 07:57:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:10:56.547 07:57:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:10:56.547 07:57:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:56.547 07:57:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:10:56.547 07:57:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:56.547 07:57:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:56.547 07:57:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:56.547 07:57:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:56.547 07:57:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:56.547 07:57:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.805 07:57:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:56.805 "name": "Existed_Raid", 00:10:56.805 "uuid": "739f79c9-e07a-4602-aa3e-92fecc12afc7", 00:10:56.805 "strip_size_kb": 64, 
00:10:56.805 "state": "offline", 00:10:56.805 "raid_level": "concat", 00:10:56.805 "superblock": false, 00:10:56.805 "num_base_bdevs": 2, 00:10:56.805 "num_base_bdevs_discovered": 1, 00:10:56.805 "num_base_bdevs_operational": 1, 00:10:56.805 "base_bdevs_list": [ 00:10:56.805 { 00:10:56.805 "name": null, 00:10:56.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.805 "is_configured": false, 00:10:56.805 "data_offset": 0, 00:10:56.805 "data_size": 65536 00:10:56.805 }, 00:10:56.805 { 00:10:56.805 "name": "BaseBdev2", 00:10:56.805 "uuid": "1ef86b15-5f78-4fe1-b895-e77d00ae8078", 00:10:56.805 "is_configured": true, 00:10:56.805 "data_offset": 0, 00:10:56.805 "data_size": 65536 00:10:56.805 } 00:10:56.805 ] 00:10:56.805 }' 00:10:56.805 07:57:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:56.805 07:57:02 -- common/autotest_common.sh@10 -- # set +x 00:10:57.371 07:57:03 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:10:57.371 07:57:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:57.371 07:57:03 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:57.371 07:57:03 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:10:57.649 07:57:03 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:10:57.649 07:57:03 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:57.649 07:57:03 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:57.907 [2024-07-13 07:57:03.568580] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:57.907 [2024-07-13 07:57:03.568642] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027080 name Existed_Raid, state offline 00:10:57.907 07:57:03 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:10:57.907 07:57:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:57.907 07:57:03 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:57.907 07:57:03 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:10:58.174 07:57:03 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:10:58.174 07:57:03 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:10:58.174 07:57:03 -- bdev/bdev_raid.sh@287 -- # killprocess 58367 00:10:58.174 07:57:03 -- common/autotest_common.sh@926 -- # '[' -z 58367 ']' 00:10:58.174 07:57:03 -- common/autotest_common.sh@930 -- # kill -0 58367 00:10:58.174 07:57:03 -- common/autotest_common.sh@931 -- # uname 00:10:58.174 07:57:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:58.174 07:57:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58367 00:10:58.174 killing process with pid 58367 00:10:58.174 07:57:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:58.174 07:57:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:58.174 07:57:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58367' 00:10:58.174 07:57:03 -- common/autotest_common.sh@945 -- # kill 58367 00:10:58.174 07:57:03 -- common/autotest_common.sh@950 -- # wait 58367 00:10:58.174 [2024-07-13 07:57:03.835075] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:58.174 [2024-07-13 07:57:03.835203] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:58.470 ************************************ 00:10:58.470 END TEST raid_state_function_test 00:10:58.470 
************************************ 00:10:58.470 07:57:04 -- bdev/bdev_raid.sh@289 -- # return 0 00:10:58.470 00:10:58.470 real 0m8.009s 00:10:58.470 user 0m14.510s 00:10:58.470 sys 0m1.064s 00:10:58.470 07:57:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:58.470 07:57:04 -- common/autotest_common.sh@10 -- # set +x 00:10:58.470 07:57:04 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:10:58.470 07:57:04 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:10:58.470 07:57:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:58.470 07:57:04 -- common/autotest_common.sh@10 -- # set +x 00:10:58.470 ************************************ 00:10:58.470 START TEST raid_state_function_test_sb 00:10:58.470 ************************************ 00:10:58.470 07:57:04 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 true 00:10:58.470 07:57:04 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:10:58.470 07:57:04 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:10:58.470 07:57:04 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:10:58.470 07:57:04 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:10:58.470 07:57:04 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:10:58.470 07:57:04 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:10:58.470 07:57:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:58.470 07:57:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:10:58.470 07:57:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:58.470 07:57:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:58.470 07:57:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:10:58.470 07:57:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:58.470 07:57:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:58.470 Process raid pid: 58666 00:10:58.470 07:57:04 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:10:58.470 07:57:04 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:10:58.470 07:57:04 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:10:58.470 07:57:04 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:10:58.470 07:57:04 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:10:58.470 07:57:04 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:10:58.470 07:57:04 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:10:58.470 07:57:04 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:10:58.470 07:57:04 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:10:58.470 07:57:04 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:10:58.470 07:57:04 -- bdev/bdev_raid.sh@226 -- # raid_pid=58666 00:10:58.470 07:57:04 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 58666' 00:10:58.470 07:57:04 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:58.470 07:57:04 -- bdev/bdev_raid.sh@228 -- # waitforlisten 58666 /var/tmp/spdk-raid.sock 00:10:58.470 07:57:04 -- common/autotest_common.sh@819 -- # '[' -z 58666 ']' 00:10:58.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
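The _sb variant starting here repeats the same state machine with on-disk superblocks: the only change on the wire is the -s flag to bdev_raid_create. The superblock reserves the first 2048 blocks of each 65536-block base bdev (hence data_offset 2048 / data_size 63488 in the dumps below, and a 126976-block raid instead of the 131072 blocks seen without superblocks). A minimal sketch of the create call, reusing the socket and base bdev names from the previous test:

  # same 64 KiB-strip concat raid, but persist a superblock on each base
  # bdev (-s); compare data_offset/data_size in bdev_raid_get_bdevs output
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid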
00:10:58.470 07:57:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:58.470 07:57:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:58.470 07:57:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:58.470 07:57:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:58.470 07:57:04 -- common/autotest_common.sh@10 -- # set +x 00:10:58.470 [2024-07-13 07:57:04.232968] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:58.470 [2024-07-13 07:57:04.233230] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:58.729 [2024-07-13 07:57:04.373882] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.729 [2024-07-13 07:57:04.421654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.729 [2024-07-13 07:57:04.467096] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.295 07:57:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:59.295 07:57:05 -- common/autotest_common.sh@852 -- # return 0 00:10:59.295 07:57:05 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:59.553 [2024-07-13 07:57:05.217528] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:59.553 [2024-07-13 07:57:05.217598] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:59.553 [2024-07-13 07:57:05.217611] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:59.553 [2024-07-13 07:57:05.217632] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:59.553 07:57:05 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:59.553 07:57:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:59.553 07:57:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:59.553 07:57:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:10:59.553 07:57:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:59.553 07:57:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:59.553 07:57:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:59.553 07:57:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:59.553 07:57:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:59.553 07:57:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:59.553 07:57:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.553 07:57:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:59.811 07:57:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:59.811 "name": "Existed_Raid", 00:10:59.811 "uuid": "a1c500b1-4cab-41f1-bd3f-9bde305089ea", 00:10:59.811 "strip_size_kb": 64, 00:10:59.811 "state": "configuring", 00:10:59.811 "raid_level": "concat", 00:10:59.811 "superblock": true, 00:10:59.811 "num_base_bdevs": 2, 00:10:59.811 "num_base_bdevs_discovered": 0, 00:10:59.811 "num_base_bdevs_operational": 2, 
00:10:59.811 "base_bdevs_list": [ 00:10:59.811 { 00:10:59.811 "name": "BaseBdev1", 00:10:59.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.811 "is_configured": false, 00:10:59.811 "data_offset": 0, 00:10:59.811 "data_size": 0 00:10:59.811 }, 00:10:59.811 { 00:10:59.811 "name": "BaseBdev2", 00:10:59.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.811 "is_configured": false, 00:10:59.811 "data_offset": 0, 00:10:59.811 "data_size": 0 00:10:59.811 } 00:10:59.811 ] 00:10:59.811 }' 00:10:59.811 07:57:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:59.811 07:57:05 -- common/autotest_common.sh@10 -- # set +x 00:11:00.377 07:57:05 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:00.377 [2024-07-13 07:57:06.121489] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:00.377 [2024-07-13 07:57:06.121737] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000025880 name Existed_Raid, state configuring 00:11:00.377 07:57:06 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:00.635 [2024-07-13 07:57:06.325607] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:00.635 [2024-07-13 07:57:06.325674] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:00.635 [2024-07-13 07:57:06.325686] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:00.635 [2024-07-13 07:57:06.325711] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:00.635 07:57:06 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:00.893 BaseBdev1 00:11:00.893 [2024-07-13 07:57:06.495737] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:00.893 07:57:06 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:11:00.893 07:57:06 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:11:00.893 07:57:06 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:00.893 07:57:06 -- common/autotest_common.sh@889 -- # local i 00:11:00.893 07:57:06 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:00.893 07:57:06 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:00.893 07:57:06 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:01.152 07:57:06 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:01.152 [ 00:11:01.152 { 00:11:01.152 "name": "BaseBdev1", 00:11:01.152 "aliases": [ 00:11:01.152 "fc60638d-911e-4e5c-9576-8da05246acc2" 00:11:01.152 ], 00:11:01.152 "product_name": "Malloc disk", 00:11:01.152 "block_size": 512, 00:11:01.152 "num_blocks": 65536, 00:11:01.152 "uuid": "fc60638d-911e-4e5c-9576-8da05246acc2", 00:11:01.152 "assigned_rate_limits": { 00:11:01.152 "rw_ios_per_sec": 0, 00:11:01.152 "rw_mbytes_per_sec": 0, 00:11:01.152 "r_mbytes_per_sec": 0, 00:11:01.152 "w_mbytes_per_sec": 0 00:11:01.152 }, 00:11:01.152 "claimed": true, 00:11:01.152 "claim_type": "exclusive_write", 00:11:01.152 "zoned": false, 00:11:01.152 "supported_io_types": { 
00:11:01.152 "read": true, 00:11:01.152 "write": true, 00:11:01.152 "unmap": true, 00:11:01.152 "write_zeroes": true, 00:11:01.152 "flush": true, 00:11:01.152 "reset": true, 00:11:01.152 "compare": false, 00:11:01.152 "compare_and_write": false, 00:11:01.152 "abort": true, 00:11:01.152 "nvme_admin": false, 00:11:01.152 "nvme_io": false 00:11:01.152 }, 00:11:01.152 "memory_domains": [ 00:11:01.152 { 00:11:01.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.152 "dma_device_type": 2 00:11:01.152 } 00:11:01.152 ], 00:11:01.152 "driver_specific": {} 00:11:01.152 } 00:11:01.152 ] 00:11:01.152 07:57:06 -- common/autotest_common.sh@895 -- # return 0 00:11:01.152 07:57:06 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:01.152 07:57:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:01.152 07:57:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:01.152 07:57:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:01.152 07:57:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:01.152 07:57:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:01.152 07:57:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:01.152 07:57:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:01.152 07:57:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:01.152 07:57:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:01.152 07:57:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.152 07:57:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:01.411 07:57:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:01.411 "name": "Existed_Raid", 00:11:01.411 "uuid": "7016f0c9-2af1-4856-a4ee-15c1727cfb8c", 00:11:01.411 "strip_size_kb": 64, 00:11:01.411 "state": "configuring", 00:11:01.411 "raid_level": "concat", 00:11:01.411 "superblock": true, 00:11:01.411 "num_base_bdevs": 2, 00:11:01.411 "num_base_bdevs_discovered": 1, 00:11:01.411 "num_base_bdevs_operational": 2, 00:11:01.411 "base_bdevs_list": [ 00:11:01.411 { 00:11:01.411 "name": "BaseBdev1", 00:11:01.411 "uuid": "fc60638d-911e-4e5c-9576-8da05246acc2", 00:11:01.411 "is_configured": true, 00:11:01.411 "data_offset": 2048, 00:11:01.411 "data_size": 63488 00:11:01.411 }, 00:11:01.411 { 00:11:01.411 "name": "BaseBdev2", 00:11:01.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.411 "is_configured": false, 00:11:01.411 "data_offset": 0, 00:11:01.411 "data_size": 0 00:11:01.411 } 00:11:01.411 ] 00:11:01.411 }' 00:11:01.411 07:57:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:01.411 07:57:07 -- common/autotest_common.sh@10 -- # set +x 00:11:01.978 07:57:07 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:02.237 [2024-07-13 07:57:07.919965] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:02.237 [2024-07-13 07:57:07.920007] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026180 name Existed_Raid, state configuring 00:11:02.237 07:57:07 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:11:02.237 07:57:07 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:02.496 07:57:08 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:02.496 BaseBdev1 00:11:02.755 07:57:08 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:11:02.755 07:57:08 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:11:02.755 07:57:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:02.755 07:57:08 -- common/autotest_common.sh@889 -- # local i 00:11:02.755 07:57:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:02.755 07:57:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:02.755 07:57:08 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:02.755 07:57:08 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:03.014 [ 00:11:03.014 { 00:11:03.014 "name": "BaseBdev1", 00:11:03.014 "aliases": [ 00:11:03.014 "ffbeabce-97e7-4088-bc58-5a45629c2c1e" 00:11:03.014 ], 00:11:03.014 "product_name": "Malloc disk", 00:11:03.014 "block_size": 512, 00:11:03.014 "num_blocks": 65536, 00:11:03.014 "uuid": "ffbeabce-97e7-4088-bc58-5a45629c2c1e", 00:11:03.014 "assigned_rate_limits": { 00:11:03.014 "rw_ios_per_sec": 0, 00:11:03.014 "rw_mbytes_per_sec": 0, 00:11:03.014 "r_mbytes_per_sec": 0, 00:11:03.014 "w_mbytes_per_sec": 0 00:11:03.014 }, 00:11:03.014 "claimed": false, 00:11:03.014 "zoned": false, 00:11:03.014 "supported_io_types": { 00:11:03.014 "read": true, 00:11:03.014 "write": true, 00:11:03.014 "unmap": true, 00:11:03.014 "write_zeroes": true, 00:11:03.014 "flush": true, 00:11:03.014 "reset": true, 00:11:03.014 "compare": false, 00:11:03.014 "compare_and_write": false, 00:11:03.014 "abort": true, 00:11:03.014 "nvme_admin": false, 00:11:03.014 "nvme_io": false 00:11:03.014 }, 00:11:03.014 "memory_domains": [ 00:11:03.014 { 00:11:03.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.014 "dma_device_type": 2 00:11:03.014 } 00:11:03.014 ], 00:11:03.014 "driver_specific": {} 00:11:03.014 } 00:11:03.014 ] 00:11:03.014 07:57:08 -- common/autotest_common.sh@895 -- # return 0 00:11:03.014 07:57:08 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:03.014 [2024-07-13 07:57:08.796067] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:03.014 [2024-07-13 07:57:08.797726] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:03.014 [2024-07-13 07:57:08.797782] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:03.014 07:57:08 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:11:03.014 07:57:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:03.014 07:57:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:03.014 07:57:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:03.014 07:57:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:03.014 07:57:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:03.014 07:57:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:03.014 07:57:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:03.014 07:57:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:03.014 07:57:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:03.014 
07:57:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:03.014 07:57:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:03.014 07:57:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:03.014 07:57:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.273 07:57:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:03.273 "name": "Existed_Raid", 00:11:03.273 "uuid": "df5a34eb-9caa-4db3-aae7-fa9dc3d8f835", 00:11:03.273 "strip_size_kb": 64, 00:11:03.273 "state": "configuring", 00:11:03.273 "raid_level": "concat", 00:11:03.273 "superblock": true, 00:11:03.273 "num_base_bdevs": 2, 00:11:03.273 "num_base_bdevs_discovered": 1, 00:11:03.273 "num_base_bdevs_operational": 2, 00:11:03.273 "base_bdevs_list": [ 00:11:03.273 { 00:11:03.273 "name": "BaseBdev1", 00:11:03.273 "uuid": "ffbeabce-97e7-4088-bc58-5a45629c2c1e", 00:11:03.273 "is_configured": true, 00:11:03.273 "data_offset": 2048, 00:11:03.273 "data_size": 63488 00:11:03.273 }, 00:11:03.273 { 00:11:03.273 "name": "BaseBdev2", 00:11:03.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.273 "is_configured": false, 00:11:03.273 "data_offset": 0, 00:11:03.273 "data_size": 0 00:11:03.273 } 00:11:03.273 ] 00:11:03.273 }' 00:11:03.273 07:57:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:03.273 07:57:09 -- common/autotest_common.sh@10 -- # set +x 00:11:04.210 07:57:09 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:04.468 BaseBdev2 00:11:04.468 [2024-07-13 07:57:10.035949] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:04.468 [2024-07-13 07:57:10.036099] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000027680 00:11:04.468 [2024-07-13 07:57:10.036127] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:04.468 [2024-07-13 07:57:10.036197] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:11:04.468 [2024-07-13 07:57:10.036417] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000027680 00:11:04.468 [2024-07-13 07:57:10.036428] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000027680 00:11:04.468 [2024-07-13 07:57:10.036505] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.468 07:57:10 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:11:04.468 07:57:10 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:11:04.468 07:57:10 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:04.468 07:57:10 -- common/autotest_common.sh@889 -- # local i 00:11:04.468 07:57:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:04.468 07:57:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:04.468 07:57:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:04.468 07:57:10 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:04.727 [ 00:11:04.727 { 00:11:04.727 "name": "BaseBdev2", 00:11:04.727 "aliases": [ 00:11:04.727 "dade8bfe-3f44-4010-909d-8d7afd215607" 00:11:04.727 ], 00:11:04.727 "product_name": "Malloc disk", 00:11:04.727 "block_size": 
512, 00:11:04.727 "num_blocks": 65536, 00:11:04.727 "uuid": "dade8bfe-3f44-4010-909d-8d7afd215607", 00:11:04.727 "assigned_rate_limits": { 00:11:04.727 "rw_ios_per_sec": 0, 00:11:04.727 "rw_mbytes_per_sec": 0, 00:11:04.727 "r_mbytes_per_sec": 0, 00:11:04.727 "w_mbytes_per_sec": 0 00:11:04.727 }, 00:11:04.727 "claimed": true, 00:11:04.727 "claim_type": "exclusive_write", 00:11:04.727 "zoned": false, 00:11:04.727 "supported_io_types": { 00:11:04.727 "read": true, 00:11:04.727 "write": true, 00:11:04.727 "unmap": true, 00:11:04.727 "write_zeroes": true, 00:11:04.727 "flush": true, 00:11:04.727 "reset": true, 00:11:04.727 "compare": false, 00:11:04.727 "compare_and_write": false, 00:11:04.727 "abort": true, 00:11:04.727 "nvme_admin": false, 00:11:04.727 "nvme_io": false 00:11:04.727 }, 00:11:04.727 "memory_domains": [ 00:11:04.727 { 00:11:04.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.727 "dma_device_type": 2 00:11:04.727 } 00:11:04.727 ], 00:11:04.727 "driver_specific": {} 00:11:04.727 } 00:11:04.727 ] 00:11:04.727 07:57:10 -- common/autotest_common.sh@895 -- # return 0 00:11:04.727 07:57:10 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:11:04.727 07:57:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:04.727 07:57:10 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:11:04.727 07:57:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:04.727 07:57:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:04.727 07:57:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:04.727 07:57:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:04.727 07:57:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:04.727 07:57:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:04.727 07:57:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:04.727 07:57:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:04.727 07:57:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:04.727 07:57:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.727 07:57:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:04.985 07:57:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:04.985 "name": "Existed_Raid", 00:11:04.985 "uuid": "df5a34eb-9caa-4db3-aae7-fa9dc3d8f835", 00:11:04.985 "strip_size_kb": 64, 00:11:04.985 "state": "online", 00:11:04.985 "raid_level": "concat", 00:11:04.985 "superblock": true, 00:11:04.985 "num_base_bdevs": 2, 00:11:04.985 "num_base_bdevs_discovered": 2, 00:11:04.985 "num_base_bdevs_operational": 2, 00:11:04.985 "base_bdevs_list": [ 00:11:04.985 { 00:11:04.985 "name": "BaseBdev1", 00:11:04.985 "uuid": "ffbeabce-97e7-4088-bc58-5a45629c2c1e", 00:11:04.985 "is_configured": true, 00:11:04.985 "data_offset": 2048, 00:11:04.985 "data_size": 63488 00:11:04.985 }, 00:11:04.985 { 00:11:04.985 "name": "BaseBdev2", 00:11:04.985 "uuid": "dade8bfe-3f44-4010-909d-8d7afd215607", 00:11:04.985 "is_configured": true, 00:11:04.985 "data_offset": 2048, 00:11:04.985 "data_size": 63488 00:11:04.985 } 00:11:04.985 ] 00:11:04.985 }' 00:11:04.985 07:57:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:04.985 07:57:10 -- common/autotest_common.sh@10 -- # set +x 00:11:05.552 07:57:11 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:05.810 [2024-07-13 07:57:11.416263] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:05.810 [2024-07-13 07:57:11.416296] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:05.810 [2024-07-13 07:57:11.416347] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:05.810 07:57:11 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:11:05.810 07:57:11 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:11:05.810 07:57:11 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:11:05.810 07:57:11 -- bdev/bdev_raid.sh@197 -- # return 1 00:11:05.810 07:57:11 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:11:05.810 07:57:11 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:11:05.810 07:57:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:05.810 07:57:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:11:05.810 07:57:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:05.810 07:57:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:05.810 07:57:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:11:05.810 07:57:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:05.810 07:57:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:05.810 07:57:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:05.810 07:57:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:05.810 07:57:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:05.810 07:57:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.810 07:57:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:05.810 "name": "Existed_Raid", 00:11:05.810 "uuid": "df5a34eb-9caa-4db3-aae7-fa9dc3d8f835", 00:11:05.810 "strip_size_kb": 64, 00:11:05.810 "state": "offline", 00:11:05.810 "raid_level": "concat", 00:11:05.810 "superblock": true, 00:11:05.810 "num_base_bdevs": 2, 00:11:05.810 "num_base_bdevs_discovered": 1, 00:11:05.810 "num_base_bdevs_operational": 1, 00:11:05.810 "base_bdevs_list": [ 00:11:05.810 { 00:11:05.810 "name": null, 00:11:05.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.810 "is_configured": false, 00:11:05.810 "data_offset": 2048, 00:11:05.810 "data_size": 63488 00:11:05.810 }, 00:11:05.810 { 00:11:05.810 "name": "BaseBdev2", 00:11:05.810 "uuid": "dade8bfe-3f44-4010-909d-8d7afd215607", 00:11:05.810 "is_configured": true, 00:11:05.810 "data_offset": 2048, 00:11:05.810 "data_size": 63488 00:11:05.810 } 00:11:05.810 ] 00:11:05.810 }' 00:11:05.810 07:57:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:05.810 07:57:11 -- common/autotest_common.sh@10 -- # set +x 00:11:06.377 07:57:12 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:11:06.377 07:57:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:06.377 07:57:12 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:06.377 07:57:12 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:11:06.636 07:57:12 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:11:06.636 07:57:12 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:06.636 07:57:12 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:06.895 [2024-07-13 07:57:12.601916] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: 
*DEBUG*: BaseBdev2 00:11:06.895 [2024-07-13 07:57:12.601982] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027680 name Existed_Raid, state offline 00:11:06.895 07:57:12 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:11:06.895 07:57:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:06.895 07:57:12 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:11:06.895 07:57:12 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:07.158 07:57:12 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:11:07.158 07:57:12 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:11:07.158 07:57:12 -- bdev/bdev_raid.sh@287 -- # killprocess 58666 00:11:07.158 07:57:12 -- common/autotest_common.sh@926 -- # '[' -z 58666 ']' 00:11:07.158 07:57:12 -- common/autotest_common.sh@930 -- # kill -0 58666 00:11:07.158 07:57:12 -- common/autotest_common.sh@931 -- # uname 00:11:07.158 07:57:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:07.158 07:57:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58666 00:11:07.158 07:57:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:07.158 07:57:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:07.158 killing process with pid 58666 00:11:07.158 07:57:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58666' 00:11:07.158 07:57:12 -- common/autotest_common.sh@945 -- # kill 58666 00:11:07.158 07:57:12 -- common/autotest_common.sh@950 -- # wait 58666 00:11:07.158 [2024-07-13 07:57:12.824220] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:07.158 [2024-07-13 07:57:12.824290] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:07.417 ************************************ 00:11:07.417 END TEST raid_state_function_test_sb 00:11:07.417 ************************************ 00:11:07.418 07:57:13 -- bdev/bdev_raid.sh@289 -- # return 0 00:11:07.418 00:11:07.418 real 0m8.938s 00:11:07.418 user 0m16.349s 00:11:07.418 sys 0m1.118s 00:11:07.418 07:57:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:07.418 07:57:13 -- common/autotest_common.sh@10 -- # set +x 00:11:07.418 07:57:13 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:11:07.418 07:57:13 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:11:07.418 07:57:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:07.418 07:57:13 -- common/autotest_common.sh@10 -- # set +x 00:11:07.418 ************************************ 00:11:07.418 START TEST raid_superblock_test 00:11:07.418 ************************************ 00:11:07.418 07:57:13 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 2 00:11:07.418 07:57:13 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:11:07.418 07:57:13 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:11:07.418 07:57:13 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:11:07.418 07:57:13 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:11:07.418 07:57:13 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:11:07.418 07:57:13 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:11:07.418 07:57:13 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:11:07.418 07:57:13 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:11:07.418 07:57:13 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:11:07.418 07:57:13 -- bdev/bdev_raid.sh@344 -- # local strip_size 
00:11:07.418 07:57:13 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:11:07.418 07:57:13 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:11:07.418 07:57:13 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:11:07.418 07:57:13 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:11:07.418 07:57:13 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:11:07.418 07:57:13 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:11:07.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:07.418 07:57:13 -- bdev/bdev_raid.sh@357 -- # raid_pid=58987 00:11:07.418 07:57:13 -- bdev/bdev_raid.sh@358 -- # waitforlisten 58987 /var/tmp/spdk-raid.sock 00:11:07.418 07:57:13 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:11:07.418 07:57:13 -- common/autotest_common.sh@819 -- # '[' -z 58987 ']' 00:11:07.418 07:57:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:07.418 07:57:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:07.418 07:57:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:07.418 07:57:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:07.418 07:57:13 -- common/autotest_common.sh@10 -- # set +x 00:11:07.418 [2024-07-13 07:57:13.225452] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:07.418 [2024-07-13 07:57:13.225732] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58987 ] 00:11:07.675 [2024-07-13 07:57:13.373971] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.675 [2024-07-13 07:57:13.426465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.675 [2024-07-13 07:57:13.476214] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.241 07:57:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:08.241 07:57:14 -- common/autotest_common.sh@852 -- # return 0 00:11:08.241 07:57:14 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:11:08.241 07:57:14 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:11:08.241 07:57:14 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:11:08.241 07:57:14 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:11:08.242 07:57:14 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:08.242 07:57:14 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:08.242 07:57:14 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:11:08.242 07:57:14 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:08.242 07:57:14 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:11:08.500 malloc1 00:11:08.500 07:57:14 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:08.758 [2024-07-13 07:57:14.351856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:08.758 [2024-07-13 07:57:14.351940] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:11:08.758 [2024-07-13 07:57:14.351989] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000026180 00:11:08.758 [2024-07-13 07:57:14.352028] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.758 [2024-07-13 07:57:14.353875] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.758 [2024-07-13 07:57:14.353919] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:08.758 pt1 00:11:08.758 07:57:14 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:11:08.758 07:57:14 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:11:08.758 07:57:14 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:11:08.758 07:57:14 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:11:08.758 07:57:14 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:08.758 07:57:14 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:08.758 07:57:14 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:11:08.758 07:57:14 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:08.758 07:57:14 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:11:09.017 malloc2 00:11:09.017 07:57:14 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:09.017 [2024-07-13 07:57:14.801297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:09.017 [2024-07-13 07:57:14.801367] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.017 [2024-07-13 07:57:14.801411] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027f80 00:11:09.017 [2024-07-13 07:57:14.801444] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.017 [2024-07-13 07:57:14.803329] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.017 [2024-07-13 07:57:14.803370] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:09.017 pt2 00:11:09.017 07:57:14 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:11:09.017 07:57:14 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:11:09.017 07:57:14 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:11:09.276 [2024-07-13 07:57:14.973433] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:09.276 [2024-07-13 07:57:14.975117] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:09.276 [2024-07-13 07:57:14.975240] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000029480 00:11:09.276 [2024-07-13 07:57:14.975253] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:09.276 [2024-07-13 07:57:14.975362] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:11:09.276 [2024-07-13 07:57:14.975577] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000029480 00:11:09.276 [2024-07-13 07:57:14.975589] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000029480 00:11:09.276 [2024-07-13 
07:57:14.975670] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.276 07:57:14 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:09.276 07:57:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:09.276 07:57:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:09.276 07:57:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:09.276 07:57:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:09.276 07:57:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:09.276 07:57:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:09.276 07:57:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:09.276 07:57:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:09.276 07:57:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:09.276 07:57:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:09.276 07:57:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.534 07:57:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:09.534 "name": "raid_bdev1", 00:11:09.534 "uuid": "0aaac0f5-dd0a-4b24-b831-2d262fce05af", 00:11:09.534 "strip_size_kb": 64, 00:11:09.534 "state": "online", 00:11:09.534 "raid_level": "concat", 00:11:09.534 "superblock": true, 00:11:09.534 "num_base_bdevs": 2, 00:11:09.534 "num_base_bdevs_discovered": 2, 00:11:09.534 "num_base_bdevs_operational": 2, 00:11:09.534 "base_bdevs_list": [ 00:11:09.534 { 00:11:09.534 "name": "pt1", 00:11:09.534 "uuid": "b72c60e2-e8a0-5f50-9b01-77420c10e208", 00:11:09.534 "is_configured": true, 00:11:09.534 "data_offset": 2048, 00:11:09.534 "data_size": 63488 00:11:09.534 }, 00:11:09.534 { 00:11:09.534 "name": "pt2", 00:11:09.534 "uuid": "3579ced6-1df5-5506-ae59-4093a29f4799", 00:11:09.534 "is_configured": true, 00:11:09.534 "data_offset": 2048, 00:11:09.534 "data_size": 63488 00:11:09.534 } 00:11:09.534 ] 00:11:09.534 }' 00:11:09.534 07:57:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:09.534 07:57:15 -- common/autotest_common.sh@10 -- # set +x 00:11:10.101 07:57:15 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:10.101 07:57:15 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:11:10.359 [2024-07-13 07:57:15.993633] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:10.359 07:57:16 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=0aaac0f5-dd0a-4b24-b831-2d262fce05af 00:11:10.359 07:57:16 -- bdev/bdev_raid.sh@380 -- # '[' -z 0aaac0f5-dd0a-4b24-b831-2d262fce05af ']' 00:11:10.359 07:57:16 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:10.359 [2024-07-13 07:57:16.165551] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:10.359 [2024-07-13 07:57:16.165582] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:10.359 [2024-07-13 07:57:16.165665] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:10.359 [2024-07-13 07:57:16.165697] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:10.359 [2024-07-13 07:57:16.165707] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000029480 name raid_bdev1, 
state offline 00:11:10.623 07:57:16 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:10.623 07:57:16 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:11:10.623 07:57:16 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:11:10.623 07:57:16 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:11:10.623 07:57:16 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:11:10.623 07:57:16 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:11:10.881 07:57:16 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:11:10.881 07:57:16 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:11.139 07:57:16 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:11.139 07:57:16 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:11:11.396 07:57:16 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:11:11.396 07:57:16 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:11:11.396 07:57:16 -- common/autotest_common.sh@640 -- # local es=0 00:11:11.396 07:57:16 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:11:11.396 07:57:16 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:11.396 07:57:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:11.396 07:57:16 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:11.396 07:57:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:11.396 07:57:16 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:11.396 07:57:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:11.396 07:57:16 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:11.396 07:57:16 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:11.396 07:57:16 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:11:11.396 [2024-07-13 07:57:17.153715] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:11.396 [2024-07-13 07:57:17.155408] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:11.396 [2024-07-13 07:57:17.155456] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:11:11.396 [2024-07-13 07:57:17.155526] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:11:11.396 [2024-07-13 07:57:17.155558] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:11.396 [2024-07-13 07:57:17.155569] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000029a80 name raid_bdev1, state configuring 00:11:11.396 request: 00:11:11.396 { 00:11:11.396 "name": "raid_bdev1", 
00:11:11.396 "raid_level": "concat", 00:11:11.396 "base_bdevs": [ 00:11:11.396 "malloc1", 00:11:11.396 "malloc2" 00:11:11.396 ], 00:11:11.396 "superblock": false, 00:11:11.396 "strip_size_kb": 64, 00:11:11.396 "method": "bdev_raid_create", 00:11:11.396 "req_id": 1 00:11:11.396 } 00:11:11.396 Got JSON-RPC error response 00:11:11.396 response: 00:11:11.396 { 00:11:11.396 "code": -17, 00:11:11.396 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:11.396 } 00:11:11.396 07:57:17 -- common/autotest_common.sh@643 -- # es=1 00:11:11.397 07:57:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:11.397 07:57:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:11.397 07:57:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:11.397 07:57:17 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:11:11.397 07:57:17 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:11.654 07:57:17 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:11:11.654 07:57:17 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:11:11.654 07:57:17 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:11.912 [2024-07-13 07:57:17.638684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:11.912 [2024-07-13 07:57:17.638785] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.912 [2024-07-13 07:57:17.638820] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002a980 00:11:11.912 [2024-07-13 07:57:17.638859] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.912 [2024-07-13 07:57:17.640691] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.912 [2024-07-13 07:57:17.640742] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:11.912 [2024-07-13 07:57:17.640812] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:11:11.912 [2024-07-13 07:57:17.640859] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:11.912 pt1 00:11:11.912 07:57:17 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:11:11.912 07:57:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:11.912 07:57:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:11.912 07:57:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:11.912 07:57:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:11.912 07:57:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:11.912 07:57:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:11.912 07:57:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:11.912 07:57:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:11.912 07:57:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:11.912 07:57:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:11.912 07:57:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.170 07:57:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:12.170 "name": "raid_bdev1", 00:11:12.170 "uuid": "0aaac0f5-dd0a-4b24-b831-2d262fce05af", 00:11:12.170 "strip_size_kb": 64, 
00:11:12.170 "state": "configuring", 00:11:12.170 "raid_level": "concat", 00:11:12.170 "superblock": true, 00:11:12.170 "num_base_bdevs": 2, 00:11:12.170 "num_base_bdevs_discovered": 1, 00:11:12.170 "num_base_bdevs_operational": 2, 00:11:12.170 "base_bdevs_list": [ 00:11:12.170 { 00:11:12.170 "name": "pt1", 00:11:12.170 "uuid": "b72c60e2-e8a0-5f50-9b01-77420c10e208", 00:11:12.170 "is_configured": true, 00:11:12.170 "data_offset": 2048, 00:11:12.170 "data_size": 63488 00:11:12.170 }, 00:11:12.170 { 00:11:12.170 "name": null, 00:11:12.170 "uuid": "3579ced6-1df5-5506-ae59-4093a29f4799", 00:11:12.170 "is_configured": false, 00:11:12.170 "data_offset": 2048, 00:11:12.170 "data_size": 63488 00:11:12.170 } 00:11:12.170 ] 00:11:12.170 }' 00:11:12.170 07:57:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:12.170 07:57:17 -- common/autotest_common.sh@10 -- # set +x 00:11:12.736 07:57:18 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:11:12.736 07:57:18 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:11:12.736 07:57:18 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:11:12.736 07:57:18 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:12.995 [2024-07-13 07:57:18.606869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:12.995 [2024-07-13 07:57:18.606980] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.995 [2024-07-13 07:57:18.607024] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002c480 00:11:12.995 [2024-07-13 07:57:18.607052] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.995 [2024-07-13 07:57:18.607356] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.995 [2024-07-13 07:57:18.607396] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:12.995 [2024-07-13 07:57:18.607452] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:11:12.995 [2024-07-13 07:57:18.607658] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:12.995 [2024-07-13 07:57:18.607743] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002be80 00:11:12.995 [2024-07-13 07:57:18.607754] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:12.995 [2024-07-13 07:57:18.607814] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:11:12.995 [2024-07-13 07:57:18.607991] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002be80 00:11:12.995 [2024-07-13 07:57:18.608002] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002be80 00:11:12.995 [2024-07-13 07:57:18.608058] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.995 pt2 00:11:12.995 07:57:18 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:11:12.995 07:57:18 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:11:12.995 07:57:18 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:12.995 07:57:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:12.995 07:57:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:12.995 07:57:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:12.995 
07:57:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:12.995 07:57:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:12.995 07:57:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:12.995 07:57:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:12.995 07:57:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:12.995 07:57:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:12.995 07:57:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:12.995 07:57:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.254 07:57:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:13.254 "name": "raid_bdev1", 00:11:13.254 "uuid": "0aaac0f5-dd0a-4b24-b831-2d262fce05af", 00:11:13.254 "strip_size_kb": 64, 00:11:13.254 "state": "online", 00:11:13.254 "raid_level": "concat", 00:11:13.254 "superblock": true, 00:11:13.254 "num_base_bdevs": 2, 00:11:13.254 "num_base_bdevs_discovered": 2, 00:11:13.254 "num_base_bdevs_operational": 2, 00:11:13.254 "base_bdevs_list": [ 00:11:13.254 { 00:11:13.254 "name": "pt1", 00:11:13.254 "uuid": "b72c60e2-e8a0-5f50-9b01-77420c10e208", 00:11:13.254 "is_configured": true, 00:11:13.254 "data_offset": 2048, 00:11:13.254 "data_size": 63488 00:11:13.254 }, 00:11:13.254 { 00:11:13.254 "name": "pt2", 00:11:13.254 "uuid": "3579ced6-1df5-5506-ae59-4093a29f4799", 00:11:13.254 "is_configured": true, 00:11:13.254 "data_offset": 2048, 00:11:13.254 "data_size": 63488 00:11:13.254 } 00:11:13.254 ] 00:11:13.254 }' 00:11:13.254 07:57:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:13.254 07:57:18 -- common/autotest_common.sh@10 -- # set +x 00:11:13.868 07:57:19 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:13.868 07:57:19 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:11:13.868 [2024-07-13 07:57:19.671214] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:14.141 07:57:19 -- bdev/bdev_raid.sh@430 -- # '[' 0aaac0f5-dd0a-4b24-b831-2d262fce05af '!=' 0aaac0f5-dd0a-4b24-b831-2d262fce05af ']' 00:11:14.141 07:57:19 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:11:14.141 07:57:19 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:11:14.141 07:57:19 -- bdev/bdev_raid.sh@197 -- # return 1 00:11:14.141 07:57:19 -- bdev/bdev_raid.sh@511 -- # killprocess 58987 00:11:14.141 07:57:19 -- common/autotest_common.sh@926 -- # '[' -z 58987 ']' 00:11:14.141 07:57:19 -- common/autotest_common.sh@930 -- # kill -0 58987 00:11:14.141 07:57:19 -- common/autotest_common.sh@931 -- # uname 00:11:14.141 07:57:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:14.141 07:57:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58987 00:11:14.141 killing process with pid 58987 00:11:14.141 07:57:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:14.141 07:57:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:14.141 07:57:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58987' 00:11:14.141 07:57:19 -- common/autotest_common.sh@945 -- # kill 58987 00:11:14.141 07:57:19 -- common/autotest_common.sh@950 -- # wait 58987 00:11:14.141 [2024-07-13 07:57:19.717125] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:14.141 [2024-07-13 07:57:19.717192] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:11:14.141 [2024-07-13 07:57:19.717224] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:14.141 [2024-07-13 07:57:19.717234] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002be80 name raid_bdev1, state offline 00:11:14.141 [2024-07-13 07:57:19.738485] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:14.141 07:57:19 -- bdev/bdev_raid.sh@513 -- # return 0 00:11:14.141 00:11:14.141 real 0m6.855s 00:11:14.141 user 0m12.353s 00:11:14.141 sys 0m0.933s 00:11:14.141 07:57:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:14.141 07:57:19 -- common/autotest_common.sh@10 -- # set +x 00:11:14.141 ************************************ 00:11:14.141 END TEST raid_superblock_test 00:11:14.141 ************************************ 00:11:14.398 07:57:19 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:11:14.398 07:57:19 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:11:14.398 07:57:19 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:11:14.398 07:57:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:14.398 07:57:19 -- common/autotest_common.sh@10 -- # set +x 00:11:14.398 ************************************ 00:11:14.398 START TEST raid_state_function_test 00:11:14.398 ************************************ 00:11:14.398 07:57:19 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 false 00:11:14.398 07:57:19 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:11:14.398 07:57:19 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:11:14.399 07:57:19 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:11:14.399 07:57:19 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:11:14.399 07:57:19 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:11:14.399 07:57:19 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:11:14.399 07:57:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:14.399 07:57:19 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:11:14.399 07:57:19 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:14.399 07:57:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:14.399 07:57:19 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:11:14.399 07:57:19 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:14.399 07:57:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:14.399 Process raid pid: 59220 00:11:14.399 07:57:20 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:11:14.399 07:57:20 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:11:14.399 07:57:20 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:11:14.399 07:57:20 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:11:14.399 07:57:20 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:11:14.399 07:57:20 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:11:14.399 07:57:20 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:11:14.399 07:57:20 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:11:14.399 07:57:20 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:11:14.399 07:57:20 -- bdev/bdev_raid.sh@226 -- # raid_pid=59220 00:11:14.399 07:57:20 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 59220' 00:11:14.399 07:57:20 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 
00:11:14.399 07:57:20 -- bdev/bdev_raid.sh@228 -- # waitforlisten 59220 /var/tmp/spdk-raid.sock 00:11:14.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:14.399 07:57:20 -- common/autotest_common.sh@819 -- # '[' -z 59220 ']' 00:11:14.399 07:57:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:14.399 07:57:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:14.399 07:57:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:14.399 07:57:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:14.399 07:57:20 -- common/autotest_common.sh@10 -- # set +x 00:11:14.399 [2024-07-13 07:57:20.134820] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:14.399 [2024-07-13 07:57:20.135031] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.657 [2024-07-13 07:57:20.279969] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.657 [2024-07-13 07:57:20.328240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.657 [2024-07-13 07:57:20.375735] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:15.223 07:57:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:15.223 07:57:20 -- common/autotest_common.sh@852 -- # return 0 00:11:15.223 07:57:20 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:15.481 [2024-07-13 07:57:21.163683] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:15.481 [2024-07-13 07:57:21.163757] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:15.481 [2024-07-13 07:57:21.163770] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:15.481 [2024-07-13 07:57:21.163793] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:15.481 07:57:21 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:15.481 07:57:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:15.481 07:57:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:15.481 07:57:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:15.481 07:57:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:15.481 07:57:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:15.481 07:57:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:15.481 07:57:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:15.481 07:57:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:15.481 07:57:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:15.481 07:57:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.481 07:57:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:15.739 07:57:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:15.739 "name": "Existed_Raid", 00:11:15.739 "uuid": "00000000-0000-0000-0000-000000000000", 
00:11:15.739 "strip_size_kb": 0, 00:11:15.739 "state": "configuring", 00:11:15.739 "raid_level": "raid1", 00:11:15.739 "superblock": false, 00:11:15.739 "num_base_bdevs": 2, 00:11:15.739 "num_base_bdevs_discovered": 0, 00:11:15.739 "num_base_bdevs_operational": 2, 00:11:15.739 "base_bdevs_list": [ 00:11:15.739 { 00:11:15.739 "name": "BaseBdev1", 00:11:15.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.739 "is_configured": false, 00:11:15.739 "data_offset": 0, 00:11:15.739 "data_size": 0 00:11:15.739 }, 00:11:15.739 { 00:11:15.739 "name": "BaseBdev2", 00:11:15.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.739 "is_configured": false, 00:11:15.739 "data_offset": 0, 00:11:15.739 "data_size": 0 00:11:15.739 } 00:11:15.739 ] 00:11:15.739 }' 00:11:15.739 07:57:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:15.739 07:57:21 -- common/autotest_common.sh@10 -- # set +x 00:11:16.305 07:57:22 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:16.563 [2024-07-13 07:57:22.259762] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:16.563 [2024-07-13 07:57:22.259803] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000025880 name Existed_Raid, state configuring 00:11:16.563 07:57:22 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:16.821 [2024-07-13 07:57:22.487833] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:16.821 [2024-07-13 07:57:22.487918] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:16.821 [2024-07-13 07:57:22.487931] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:16.821 [2024-07-13 07:57:22.487957] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:16.821 07:57:22 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:17.079 BaseBdev1 00:11:17.079 [2024-07-13 07:57:22.674100] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:17.079 07:57:22 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:11:17.079 07:57:22 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:11:17.079 07:57:22 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:17.079 07:57:22 -- common/autotest_common.sh@889 -- # local i 00:11:17.079 07:57:22 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:17.079 07:57:22 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:17.079 07:57:22 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:17.079 07:57:22 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:17.335 [ 00:11:17.336 { 00:11:17.336 "name": "BaseBdev1", 00:11:17.336 "aliases": [ 00:11:17.336 "7d3243d6-7374-4fdf-b9a6-4747eeceb0d3" 00:11:17.336 ], 00:11:17.336 "product_name": "Malloc disk", 00:11:17.336 "block_size": 512, 00:11:17.336 "num_blocks": 65536, 00:11:17.336 "uuid": "7d3243d6-7374-4fdf-b9a6-4747eeceb0d3", 00:11:17.336 "assigned_rate_limits": { 00:11:17.336 "rw_ios_per_sec": 0, 00:11:17.336 
"rw_mbytes_per_sec": 0, 00:11:17.336 "r_mbytes_per_sec": 0, 00:11:17.336 "w_mbytes_per_sec": 0 00:11:17.336 }, 00:11:17.336 "claimed": true, 00:11:17.336 "claim_type": "exclusive_write", 00:11:17.336 "zoned": false, 00:11:17.336 "supported_io_types": { 00:11:17.336 "read": true, 00:11:17.336 "write": true, 00:11:17.336 "unmap": true, 00:11:17.336 "write_zeroes": true, 00:11:17.336 "flush": true, 00:11:17.336 "reset": true, 00:11:17.336 "compare": false, 00:11:17.336 "compare_and_write": false, 00:11:17.336 "abort": true, 00:11:17.336 "nvme_admin": false, 00:11:17.336 "nvme_io": false 00:11:17.336 }, 00:11:17.336 "memory_domains": [ 00:11:17.336 { 00:11:17.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.336 "dma_device_type": 2 00:11:17.336 } 00:11:17.336 ], 00:11:17.336 "driver_specific": {} 00:11:17.336 } 00:11:17.336 ] 00:11:17.336 07:57:23 -- common/autotest_common.sh@895 -- # return 0 00:11:17.336 07:57:23 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:17.336 07:57:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:17.336 07:57:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:17.336 07:57:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:17.336 07:57:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:17.336 07:57:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:17.336 07:57:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:17.336 07:57:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:17.336 07:57:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:17.336 07:57:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:17.336 07:57:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.336 07:57:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:17.592 07:57:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:17.592 "name": "Existed_Raid", 00:11:17.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.592 "strip_size_kb": 0, 00:11:17.592 "state": "configuring", 00:11:17.592 "raid_level": "raid1", 00:11:17.592 "superblock": false, 00:11:17.592 "num_base_bdevs": 2, 00:11:17.592 "num_base_bdevs_discovered": 1, 00:11:17.592 "num_base_bdevs_operational": 2, 00:11:17.592 "base_bdevs_list": [ 00:11:17.592 { 00:11:17.592 "name": "BaseBdev1", 00:11:17.592 "uuid": "7d3243d6-7374-4fdf-b9a6-4747eeceb0d3", 00:11:17.592 "is_configured": true, 00:11:17.592 "data_offset": 0, 00:11:17.592 "data_size": 65536 00:11:17.592 }, 00:11:17.592 { 00:11:17.592 "name": "BaseBdev2", 00:11:17.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.592 "is_configured": false, 00:11:17.592 "data_offset": 0, 00:11:17.592 "data_size": 0 00:11:17.592 } 00:11:17.592 ] 00:11:17.592 }' 00:11:17.592 07:57:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:17.592 07:57:23 -- common/autotest_common.sh@10 -- # set +x 00:11:18.155 07:57:23 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:18.413 [2024-07-13 07:57:24.086432] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:18.413 [2024-07-13 07:57:24.086691] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026180 name Existed_Raid, state configuring 00:11:18.413 07:57:24 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:11:18.413 
07:57:24 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:18.672 [2024-07-13 07:57:24.270511] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:18.672 [2024-07-13 07:57:24.273122] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:18.672 [2024-07-13 07:57:24.273286] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:18.672 07:57:24 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:11:18.672 07:57:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:18.672 07:57:24 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:18.672 07:57:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:18.672 07:57:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:18.672 07:57:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:18.672 07:57:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:18.672 07:57:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:18.672 07:57:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:18.672 07:57:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:18.672 07:57:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:18.672 07:57:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:18.672 07:57:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:18.672 07:57:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.672 07:57:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:18.672 "name": "Existed_Raid", 00:11:18.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.672 "strip_size_kb": 0, 00:11:18.672 "state": "configuring", 00:11:18.672 "raid_level": "raid1", 00:11:18.672 "superblock": false, 00:11:18.672 "num_base_bdevs": 2, 00:11:18.672 "num_base_bdevs_discovered": 1, 00:11:18.672 "num_base_bdevs_operational": 2, 00:11:18.672 "base_bdevs_list": [ 00:11:18.672 { 00:11:18.672 "name": "BaseBdev1", 00:11:18.672 "uuid": "7d3243d6-7374-4fdf-b9a6-4747eeceb0d3", 00:11:18.672 "is_configured": true, 00:11:18.672 "data_offset": 0, 00:11:18.672 "data_size": 65536 00:11:18.672 }, 00:11:18.672 { 00:11:18.672 "name": "BaseBdev2", 00:11:18.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.672 "is_configured": false, 00:11:18.672 "data_offset": 0, 00:11:18.672 "data_size": 0 00:11:18.672 } 00:11:18.672 ] 00:11:18.672 }' 00:11:18.672 07:57:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:18.672 07:57:24 -- common/autotest_common.sh@10 -- # set +x 00:11:19.608 07:57:25 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:19.866 [2024-07-13 07:57:25.458679] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:19.866 [2024-07-13 07:57:25.458730] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000027080 00:11:19.866 [2024-07-13 07:57:25.458741] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:19.866 [2024-07-13 07:57:25.458827] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001eb0 00:11:19.866 [2024-07-13 07:57:25.459035] 
bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000027080 00:11:19.866 [2024-07-13 07:57:25.459046] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000027080 00:11:19.866 [2024-07-13 07:57:25.459185] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.866 BaseBdev2 00:11:19.866 07:57:25 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:11:19.866 07:57:25 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:11:19.866 07:57:25 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:19.866 07:57:25 -- common/autotest_common.sh@889 -- # local i 00:11:19.867 07:57:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:19.867 07:57:25 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:19.867 07:57:25 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:19.867 07:57:25 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:20.125 [ 00:11:20.125 { 00:11:20.125 "name": "BaseBdev2", 00:11:20.125 "aliases": [ 00:11:20.125 "7d894495-795c-4832-937a-70dff8ac536e" 00:11:20.125 ], 00:11:20.125 "product_name": "Malloc disk", 00:11:20.125 "block_size": 512, 00:11:20.125 "num_blocks": 65536, 00:11:20.125 "uuid": "7d894495-795c-4832-937a-70dff8ac536e", 00:11:20.125 "assigned_rate_limits": { 00:11:20.125 "rw_ios_per_sec": 0, 00:11:20.125 "rw_mbytes_per_sec": 0, 00:11:20.125 "r_mbytes_per_sec": 0, 00:11:20.125 "w_mbytes_per_sec": 0 00:11:20.125 }, 00:11:20.125 "claimed": true, 00:11:20.125 "claim_type": "exclusive_write", 00:11:20.125 "zoned": false, 00:11:20.125 "supported_io_types": { 00:11:20.125 "read": true, 00:11:20.125 "write": true, 00:11:20.125 "unmap": true, 00:11:20.125 "write_zeroes": true, 00:11:20.125 "flush": true, 00:11:20.125 "reset": true, 00:11:20.125 "compare": false, 00:11:20.125 "compare_and_write": false, 00:11:20.125 "abort": true, 00:11:20.125 "nvme_admin": false, 00:11:20.125 "nvme_io": false 00:11:20.125 }, 00:11:20.125 "memory_domains": [ 00:11:20.125 { 00:11:20.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.125 "dma_device_type": 2 00:11:20.125 } 00:11:20.125 ], 00:11:20.125 "driver_specific": {} 00:11:20.125 } 00:11:20.125 ] 00:11:20.125 07:57:25 -- common/autotest_common.sh@895 -- # return 0 00:11:20.125 07:57:25 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:11:20.125 07:57:25 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:20.125 07:57:25 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:20.125 07:57:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:20.125 07:57:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:20.125 07:57:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:20.125 07:57:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:20.125 07:57:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:20.125 07:57:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:20.125 07:57:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:20.125 07:57:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:20.125 07:57:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:20.125 07:57:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:11:20.125 07:57:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.383 07:57:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:20.383 "name": "Existed_Raid", 00:11:20.383 "uuid": "5ad7690f-c4f4-4212-a3d0-1fa841158d6e", 00:11:20.383 "strip_size_kb": 0, 00:11:20.383 "state": "online", 00:11:20.383 "raid_level": "raid1", 00:11:20.383 "superblock": false, 00:11:20.383 "num_base_bdevs": 2, 00:11:20.383 "num_base_bdevs_discovered": 2, 00:11:20.383 "num_base_bdevs_operational": 2, 00:11:20.383 "base_bdevs_list": [ 00:11:20.383 { 00:11:20.383 "name": "BaseBdev1", 00:11:20.383 "uuid": "7d3243d6-7374-4fdf-b9a6-4747eeceb0d3", 00:11:20.383 "is_configured": true, 00:11:20.383 "data_offset": 0, 00:11:20.383 "data_size": 65536 00:11:20.383 }, 00:11:20.383 { 00:11:20.383 "name": "BaseBdev2", 00:11:20.383 "uuid": "7d894495-795c-4832-937a-70dff8ac536e", 00:11:20.383 "is_configured": true, 00:11:20.383 "data_offset": 0, 00:11:20.383 "data_size": 65536 00:11:20.383 } 00:11:20.383 ] 00:11:20.383 }' 00:11:20.383 07:57:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:20.383 07:57:25 -- common/autotest_common.sh@10 -- # set +x 00:11:20.948 07:57:26 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:21.207 [2024-07-13 07:57:26.794997] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:21.207 07:57:26 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:11:21.207 07:57:26 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:11:21.207 07:57:26 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:11:21.207 07:57:26 -- bdev/bdev_raid.sh@196 -- # return 0 00:11:21.207 07:57:26 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:11:21.207 07:57:26 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:11:21.207 07:57:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:21.207 07:57:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:21.207 07:57:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:21.207 07:57:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:21.207 07:57:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:11:21.207 07:57:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:21.207 07:57:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:21.207 07:57:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:21.207 07:57:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:21.207 07:57:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:21.207 07:57:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.465 07:57:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:21.465 "name": "Existed_Raid", 00:11:21.465 "uuid": "5ad7690f-c4f4-4212-a3d0-1fa841158d6e", 00:11:21.465 "strip_size_kb": 0, 00:11:21.465 "state": "online", 00:11:21.465 "raid_level": "raid1", 00:11:21.465 "superblock": false, 00:11:21.465 "num_base_bdevs": 2, 00:11:21.465 "num_base_bdevs_discovered": 1, 00:11:21.465 "num_base_bdevs_operational": 1, 00:11:21.465 "base_bdevs_list": [ 00:11:21.465 { 00:11:21.465 "name": null, 00:11:21.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.465 "is_configured": false, 00:11:21.465 "data_offset": 0, 00:11:21.465 "data_size": 65536 00:11:21.465 }, 00:11:21.465 { 00:11:21.465 
"name": "BaseBdev2", 00:11:21.465 "uuid": "7d894495-795c-4832-937a-70dff8ac536e", 00:11:21.465 "is_configured": true, 00:11:21.465 "data_offset": 0, 00:11:21.465 "data_size": 65536 00:11:21.465 } 00:11:21.465 ] 00:11:21.465 }' 00:11:21.465 07:57:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:21.465 07:57:27 -- common/autotest_common.sh@10 -- # set +x 00:11:22.031 07:57:27 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:11:22.031 07:57:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:22.031 07:57:27 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:11:22.031 07:57:27 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:22.031 07:57:27 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:11:22.031 07:57:27 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:22.031 07:57:27 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:22.289 [2024-07-13 07:57:28.005596] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:22.289 [2024-07-13 07:57:28.005627] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:22.289 [2024-07-13 07:57:28.005680] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:22.289 [2024-07-13 07:57:28.016141] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:22.289 [2024-07-13 07:57:28.016169] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027080 name Existed_Raid, state offline 00:11:22.289 07:57:28 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:11:22.289 07:57:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:22.289 07:57:28 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:22.289 07:57:28 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:11:22.548 07:57:28 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:11:22.548 07:57:28 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:11:22.548 07:57:28 -- bdev/bdev_raid.sh@287 -- # killprocess 59220 00:11:22.548 07:57:28 -- common/autotest_common.sh@926 -- # '[' -z 59220 ']' 00:11:22.548 07:57:28 -- common/autotest_common.sh@930 -- # kill -0 59220 00:11:22.548 07:57:28 -- common/autotest_common.sh@931 -- # uname 00:11:22.548 07:57:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:22.548 07:57:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 59220 00:11:22.548 killing process with pid 59220 00:11:22.548 07:57:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:22.548 07:57:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:22.548 07:57:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 59220' 00:11:22.548 07:57:28 -- common/autotest_common.sh@945 -- # kill 59220 00:11:22.548 07:57:28 -- common/autotest_common.sh@950 -- # wait 59220 00:11:22.548 [2024-07-13 07:57:28.207051] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:22.548 [2024-07-13 07:57:28.207105] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:22.807 ************************************ 00:11:22.807 END TEST raid_state_function_test 00:11:22.807 ************************************ 00:11:22.807 07:57:28 -- bdev/bdev_raid.sh@289 -- # return 0 00:11:22.807 
00:11:22.807 real 0m8.406s 00:11:22.807 user 0m15.317s 00:11:22.807 sys 0m1.109s 00:11:22.807 07:57:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:22.807 07:57:28 -- common/autotest_common.sh@10 -- # set +x 00:11:22.807 07:57:28 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:11:22.807 07:57:28 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:11:22.807 07:57:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:22.807 07:57:28 -- common/autotest_common.sh@10 -- # set +x 00:11:22.807 ************************************ 00:11:22.807 START TEST raid_state_function_test_sb 00:11:22.807 ************************************ 00:11:22.807 07:57:28 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 true 00:11:22.807 07:57:28 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:11:22.807 07:57:28 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:11:22.807 07:57:28 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:11:22.807 07:57:28 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:11:22.807 07:57:28 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:11:22.807 07:57:28 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:11:22.807 07:57:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:22.807 07:57:28 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:11:22.807 07:57:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:22.807 07:57:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:22.807 07:57:28 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:11:22.807 07:57:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:22.807 07:57:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:22.807 07:57:28 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:11:22.807 07:57:28 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:11:22.807 07:57:28 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:11:22.807 07:57:28 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:11:22.807 07:57:28 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:11:22.807 07:57:28 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:11:22.807 07:57:28 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:11:22.807 07:57:28 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:11:22.807 07:57:28 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:11:22.807 Process raid pid: 59519 00:11:22.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:22.807 07:57:28 -- bdev/bdev_raid.sh@226 -- # raid_pid=59519 00:11:22.807 07:57:28 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 59519' 00:11:22.807 07:57:28 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:22.807 07:57:28 -- bdev/bdev_raid.sh@228 -- # waitforlisten 59519 /var/tmp/spdk-raid.sock 00:11:22.807 07:57:28 -- common/autotest_common.sh@819 -- # '[' -z 59519 ']' 00:11:22.807 07:57:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:22.807 07:57:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:22.807 07:57:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:11:22.807 07:57:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:22.807 07:57:28 -- common/autotest_common.sh@10 -- # set +x 00:11:22.807 [2024-07-13 07:57:28.600838] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:22.807 [2024-07-13 07:57:28.601061] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.066 [2024-07-13 07:57:28.748903] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.066 [2024-07-13 07:57:28.799465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.066 [2024-07-13 07:57:28.849387] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.634 07:57:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:23.634 07:57:29 -- common/autotest_common.sh@852 -- # return 0 00:11:23.634 07:57:29 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:23.893 [2024-07-13 07:57:29.503192] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:23.893 [2024-07-13 07:57:29.503268] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:23.893 [2024-07-13 07:57:29.503279] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:23.893 [2024-07-13 07:57:29.503300] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:23.893 07:57:29 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:23.893 07:57:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:23.893 07:57:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:23.893 07:57:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:23.893 07:57:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:23.893 07:57:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:23.893 07:57:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:23.893 07:57:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:23.893 07:57:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:23.893 07:57:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:23.893 07:57:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.893 07:57:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:24.152 07:57:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:24.152 "name": "Existed_Raid", 00:11:24.152 "uuid": "e0ab925d-5e67-41ae-8be9-bc275b11bfac", 00:11:24.152 "strip_size_kb": 0, 00:11:24.152 "state": "configuring", 00:11:24.152 "raid_level": "raid1", 00:11:24.152 "superblock": true, 00:11:24.152 "num_base_bdevs": 2, 00:11:24.152 "num_base_bdevs_discovered": 0, 00:11:24.152 "num_base_bdevs_operational": 2, 00:11:24.152 "base_bdevs_list": [ 00:11:24.152 { 00:11:24.152 "name": "BaseBdev1", 00:11:24.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.152 "is_configured": false, 00:11:24.152 "data_offset": 0, 00:11:24.152 "data_size": 0 00:11:24.152 }, 00:11:24.152 { 00:11:24.152 "name": "BaseBdev2", 00:11:24.152 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:24.152 "is_configured": false, 00:11:24.152 "data_offset": 0, 00:11:24.152 "data_size": 0 00:11:24.152 } 00:11:24.152 ] 00:11:24.152 }' 00:11:24.152 07:57:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:24.152 07:57:29 -- common/autotest_common.sh@10 -- # set +x 00:11:24.719 07:57:30 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:24.719 [2024-07-13 07:57:30.511198] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:24.719 [2024-07-13 07:57:30.511244] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000025880 name Existed_Raid, state configuring 00:11:24.719 07:57:30 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:24.977 [2024-07-13 07:57:30.655269] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:24.977 [2024-07-13 07:57:30.655344] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:24.977 [2024-07-13 07:57:30.655355] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:24.977 [2024-07-13 07:57:30.655379] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:24.977 07:57:30 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:25.235 BaseBdev1 00:11:25.235 [2024-07-13 07:57:30.812408] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:25.235 07:57:30 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:11:25.235 07:57:30 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:11:25.235 07:57:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:25.235 07:57:30 -- common/autotest_common.sh@889 -- # local i 00:11:25.235 07:57:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:25.235 07:57:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:25.235 07:57:30 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:25.235 07:57:30 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:25.494 [ 00:11:25.494 { 00:11:25.494 "name": "BaseBdev1", 00:11:25.494 "aliases": [ 00:11:25.494 "ecc917bf-e01e-46da-a74e-f1be907cc69f" 00:11:25.494 ], 00:11:25.494 "product_name": "Malloc disk", 00:11:25.494 "block_size": 512, 00:11:25.494 "num_blocks": 65536, 00:11:25.494 "uuid": "ecc917bf-e01e-46da-a74e-f1be907cc69f", 00:11:25.494 "assigned_rate_limits": { 00:11:25.494 "rw_ios_per_sec": 0, 00:11:25.494 "rw_mbytes_per_sec": 0, 00:11:25.494 "r_mbytes_per_sec": 0, 00:11:25.494 "w_mbytes_per_sec": 0 00:11:25.494 }, 00:11:25.494 "claimed": true, 00:11:25.494 "claim_type": "exclusive_write", 00:11:25.494 "zoned": false, 00:11:25.494 "supported_io_types": { 00:11:25.494 "read": true, 00:11:25.494 "write": true, 00:11:25.494 "unmap": true, 00:11:25.494 "write_zeroes": true, 00:11:25.494 "flush": true, 00:11:25.494 "reset": true, 00:11:25.494 "compare": false, 00:11:25.494 "compare_and_write": false, 00:11:25.494 "abort": true, 00:11:25.494 "nvme_admin": false, 00:11:25.494 "nvme_io": false 
00:11:25.494 }, 00:11:25.494 "memory_domains": [ 00:11:25.494 { 00:11:25.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.494 "dma_device_type": 2 00:11:25.494 } 00:11:25.494 ], 00:11:25.494 "driver_specific": {} 00:11:25.494 } 00:11:25.494 ] 00:11:25.494 07:57:31 -- common/autotest_common.sh@895 -- # return 0 00:11:25.494 07:57:31 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:25.494 07:57:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:25.494 07:57:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:25.494 07:57:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:25.494 07:57:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:25.494 07:57:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:25.494 07:57:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:25.494 07:57:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:25.494 07:57:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:25.494 07:57:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:25.494 07:57:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.494 07:57:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:25.777 07:57:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:25.777 "name": "Existed_Raid", 00:11:25.777 "uuid": "f860c1c0-0dcc-4777-b89b-e0faab9d9ed0", 00:11:25.777 "strip_size_kb": 0, 00:11:25.777 "state": "configuring", 00:11:25.777 "raid_level": "raid1", 00:11:25.777 "superblock": true, 00:11:25.777 "num_base_bdevs": 2, 00:11:25.777 "num_base_bdevs_discovered": 1, 00:11:25.777 "num_base_bdevs_operational": 2, 00:11:25.777 "base_bdevs_list": [ 00:11:25.777 { 00:11:25.777 "name": "BaseBdev1", 00:11:25.777 "uuid": "ecc917bf-e01e-46da-a74e-f1be907cc69f", 00:11:25.777 "is_configured": true, 00:11:25.777 "data_offset": 2048, 00:11:25.777 "data_size": 63488 00:11:25.777 }, 00:11:25.777 { 00:11:25.777 "name": "BaseBdev2", 00:11:25.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.777 "is_configured": false, 00:11:25.777 "data_offset": 0, 00:11:25.777 "data_size": 0 00:11:25.777 } 00:11:25.777 ] 00:11:25.777 }' 00:11:25.777 07:57:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:25.777 07:57:31 -- common/autotest_common.sh@10 -- # set +x 00:11:26.344 07:57:31 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:26.344 [2024-07-13 07:57:31.988640] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:26.344 [2024-07-13 07:57:31.988711] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026180 name Existed_Raid, state configuring 00:11:26.344 07:57:32 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:11:26.344 07:57:32 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:26.344 07:57:32 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:26.603 BaseBdev1 00:11:26.603 07:57:32 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:11:26.603 07:57:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:11:26.603 07:57:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:26.603 07:57:32 -- 
common/autotest_common.sh@889 -- # local i 00:11:26.603 07:57:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:26.603 07:57:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:26.603 07:57:32 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:26.861 07:57:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:26.861 [ 00:11:26.861 { 00:11:26.861 "name": "BaseBdev1", 00:11:26.861 "aliases": [ 00:11:26.861 "9cb8025f-06b8-4778-a600-60088fcf6675" 00:11:26.861 ], 00:11:26.861 "product_name": "Malloc disk", 00:11:26.861 "block_size": 512, 00:11:26.861 "num_blocks": 65536, 00:11:26.861 "uuid": "9cb8025f-06b8-4778-a600-60088fcf6675", 00:11:26.861 "assigned_rate_limits": { 00:11:26.861 "rw_ios_per_sec": 0, 00:11:26.861 "rw_mbytes_per_sec": 0, 00:11:26.861 "r_mbytes_per_sec": 0, 00:11:26.861 "w_mbytes_per_sec": 0 00:11:26.861 }, 00:11:26.861 "claimed": false, 00:11:26.861 "zoned": false, 00:11:26.861 "supported_io_types": { 00:11:26.861 "read": true, 00:11:26.861 "write": true, 00:11:26.861 "unmap": true, 00:11:26.861 "write_zeroes": true, 00:11:26.861 "flush": true, 00:11:26.861 "reset": true, 00:11:26.861 "compare": false, 00:11:26.861 "compare_and_write": false, 00:11:26.861 "abort": true, 00:11:26.861 "nvme_admin": false, 00:11:26.861 "nvme_io": false 00:11:26.861 }, 00:11:26.861 "memory_domains": [ 00:11:26.861 { 00:11:26.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.861 "dma_device_type": 2 00:11:26.861 } 00:11:26.861 ], 00:11:26.861 "driver_specific": {} 00:11:26.861 } 00:11:26.861 ] 00:11:26.861 07:57:32 -- common/autotest_common.sh@895 -- # return 0 00:11:26.861 07:57:32 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:27.120 [2024-07-13 07:57:32.722464] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.120 [2024-07-13 07:57:32.724366] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:27.120 [2024-07-13 07:57:32.724434] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:27.120 07:57:32 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:11:27.120 07:57:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:27.120 07:57:32 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:27.120 07:57:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:27.120 07:57:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:27.120 07:57:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:27.120 07:57:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:27.120 07:57:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:27.120 07:57:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:27.120 07:57:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:27.120 07:57:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:27.120 07:57:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:27.120 07:57:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.120 07:57:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:11:27.377 07:57:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:27.377 "name": "Existed_Raid", 00:11:27.377 "uuid": "8b1d4663-ecfb-42f8-b58f-3e46c12f9734", 00:11:27.377 "strip_size_kb": 0, 00:11:27.377 "state": "configuring", 00:11:27.377 "raid_level": "raid1", 00:11:27.377 "superblock": true, 00:11:27.377 "num_base_bdevs": 2, 00:11:27.377 "num_base_bdevs_discovered": 1, 00:11:27.377 "num_base_bdevs_operational": 2, 00:11:27.377 "base_bdevs_list": [ 00:11:27.377 { 00:11:27.377 "name": "BaseBdev1", 00:11:27.377 "uuid": "9cb8025f-06b8-4778-a600-60088fcf6675", 00:11:27.377 "is_configured": true, 00:11:27.377 "data_offset": 2048, 00:11:27.377 "data_size": 63488 00:11:27.377 }, 00:11:27.377 { 00:11:27.377 "name": "BaseBdev2", 00:11:27.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.377 "is_configured": false, 00:11:27.378 "data_offset": 0, 00:11:27.378 "data_size": 0 00:11:27.378 } 00:11:27.378 ] 00:11:27.378 }' 00:11:27.378 07:57:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:27.378 07:57:32 -- common/autotest_common.sh@10 -- # set +x 00:11:27.943 07:57:33 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:27.943 [2024-07-13 07:57:33.753277] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:27.943 [2024-07-13 07:57:33.753441] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000027680 00:11:27.943 BaseBdev2 00:11:28.206 [2024-07-13 07:57:33.754084] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:28.207 [2024-07-13 07:57:33.754245] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:11:28.207 [2024-07-13 07:57:33.754513] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000027680 00:11:28.207 [2024-07-13 07:57:33.754525] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000027680 00:11:28.207 [2024-07-13 07:57:33.754638] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.207 07:57:33 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:11:28.207 07:57:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:11:28.207 07:57:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:28.207 07:57:33 -- common/autotest_common.sh@889 -- # local i 00:11:28.207 07:57:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:28.207 07:57:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:28.207 07:57:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:28.207 07:57:33 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:28.464 [ 00:11:28.464 { 00:11:28.464 "name": "BaseBdev2", 00:11:28.464 "aliases": [ 00:11:28.464 "babf7f68-0403-4a77-a64b-6bc5e68d6656" 00:11:28.464 ], 00:11:28.464 "product_name": "Malloc disk", 00:11:28.464 "block_size": 512, 00:11:28.464 "num_blocks": 65536, 00:11:28.464 "uuid": "babf7f68-0403-4a77-a64b-6bc5e68d6656", 00:11:28.464 "assigned_rate_limits": { 00:11:28.464 "rw_ios_per_sec": 0, 00:11:28.464 "rw_mbytes_per_sec": 0, 00:11:28.464 "r_mbytes_per_sec": 0, 00:11:28.464 "w_mbytes_per_sec": 0 00:11:28.464 }, 00:11:28.464 "claimed": true, 00:11:28.464 "claim_type": 
"exclusive_write", 00:11:28.464 "zoned": false, 00:11:28.464 "supported_io_types": { 00:11:28.464 "read": true, 00:11:28.464 "write": true, 00:11:28.464 "unmap": true, 00:11:28.464 "write_zeroes": true, 00:11:28.464 "flush": true, 00:11:28.464 "reset": true, 00:11:28.464 "compare": false, 00:11:28.464 "compare_and_write": false, 00:11:28.464 "abort": true, 00:11:28.464 "nvme_admin": false, 00:11:28.464 "nvme_io": false 00:11:28.464 }, 00:11:28.464 "memory_domains": [ 00:11:28.464 { 00:11:28.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.464 "dma_device_type": 2 00:11:28.464 } 00:11:28.464 ], 00:11:28.464 "driver_specific": {} 00:11:28.464 } 00:11:28.464 ] 00:11:28.464 07:57:34 -- common/autotest_common.sh@895 -- # return 0 00:11:28.464 07:57:34 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:11:28.464 07:57:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:28.464 07:57:34 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:28.464 07:57:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:28.464 07:57:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:28.464 07:57:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:28.464 07:57:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:28.464 07:57:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:28.464 07:57:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:28.464 07:57:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:28.464 07:57:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:28.464 07:57:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:28.464 07:57:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:28.464 07:57:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.464 07:57:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:28.464 "name": "Existed_Raid", 00:11:28.464 "uuid": "8b1d4663-ecfb-42f8-b58f-3e46c12f9734", 00:11:28.464 "strip_size_kb": 0, 00:11:28.464 "state": "online", 00:11:28.464 "raid_level": "raid1", 00:11:28.464 "superblock": true, 00:11:28.464 "num_base_bdevs": 2, 00:11:28.464 "num_base_bdevs_discovered": 2, 00:11:28.464 "num_base_bdevs_operational": 2, 00:11:28.464 "base_bdevs_list": [ 00:11:28.464 { 00:11:28.464 "name": "BaseBdev1", 00:11:28.464 "uuid": "9cb8025f-06b8-4778-a600-60088fcf6675", 00:11:28.464 "is_configured": true, 00:11:28.464 "data_offset": 2048, 00:11:28.464 "data_size": 63488 00:11:28.464 }, 00:11:28.464 { 00:11:28.464 "name": "BaseBdev2", 00:11:28.464 "uuid": "babf7f68-0403-4a77-a64b-6bc5e68d6656", 00:11:28.464 "is_configured": true, 00:11:28.464 "data_offset": 2048, 00:11:28.464 "data_size": 63488 00:11:28.464 } 00:11:28.464 ] 00:11:28.464 }' 00:11:28.464 07:57:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:28.464 07:57:34 -- common/autotest_common.sh@10 -- # set +x 00:11:29.029 07:57:34 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:29.288 [2024-07-13 07:57:34.913507] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:29.288 07:57:34 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:11:29.288 07:57:34 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:11:29.288 07:57:34 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:11:29.288 07:57:34 -- bdev/bdev_raid.sh@196 -- # return 0 00:11:29.288 07:57:34 -- 
bdev/bdev_raid.sh@267 -- # expected_state=online 00:11:29.288 07:57:34 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:11:29.288 07:57:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:29.288 07:57:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:29.288 07:57:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:29.288 07:57:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:29.288 07:57:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:11:29.288 07:57:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:29.288 07:57:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:29.288 07:57:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:29.288 07:57:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:29.288 07:57:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.289 07:57:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:29.546 07:57:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:29.546 "name": "Existed_Raid", 00:11:29.546 "uuid": "8b1d4663-ecfb-42f8-b58f-3e46c12f9734", 00:11:29.547 "strip_size_kb": 0, 00:11:29.547 "state": "online", 00:11:29.547 "raid_level": "raid1", 00:11:29.547 "superblock": true, 00:11:29.547 "num_base_bdevs": 2, 00:11:29.547 "num_base_bdevs_discovered": 1, 00:11:29.547 "num_base_bdevs_operational": 1, 00:11:29.547 "base_bdevs_list": [ 00:11:29.547 { 00:11:29.547 "name": null, 00:11:29.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.547 "is_configured": false, 00:11:29.547 "data_offset": 2048, 00:11:29.547 "data_size": 63488 00:11:29.547 }, 00:11:29.547 { 00:11:29.547 "name": "BaseBdev2", 00:11:29.547 "uuid": "babf7f68-0403-4a77-a64b-6bc5e68d6656", 00:11:29.547 "is_configured": true, 00:11:29.547 "data_offset": 2048, 00:11:29.547 "data_size": 63488 00:11:29.547 } 00:11:29.547 ] 00:11:29.547 }' 00:11:29.547 07:57:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:29.547 07:57:35 -- common/autotest_common.sh@10 -- # set +x 00:11:30.112 07:57:35 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:11:30.112 07:57:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:30.112 07:57:35 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:30.112 07:57:35 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:11:30.371 07:57:35 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:11:30.371 07:57:35 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:30.371 07:57:35 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:30.371 [2024-07-13 07:57:36.102947] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:30.371 [2024-07-13 07:57:36.102979] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:30.371 [2024-07-13 07:57:36.103035] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:30.371 [2024-07-13 07:57:36.113605] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:30.371 [2024-07-13 07:57:36.113633] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027680 name Existed_Raid, state offline 00:11:30.371 07:57:36 -- 
bdev/bdev_raid.sh@273 -- # (( i++ )) 00:11:30.371 07:57:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:30.371 07:57:36 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:30.371 07:57:36 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:11:30.629 07:57:36 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:11:30.629 07:57:36 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:11:30.629 07:57:36 -- bdev/bdev_raid.sh@287 -- # killprocess 59519 00:11:30.629 07:57:36 -- common/autotest_common.sh@926 -- # '[' -z 59519 ']' 00:11:30.629 07:57:36 -- common/autotest_common.sh@930 -- # kill -0 59519 00:11:30.629 07:57:36 -- common/autotest_common.sh@931 -- # uname 00:11:30.629 07:57:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:30.629 07:57:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 59519 00:11:30.629 killing process with pid 59519 00:11:30.629 07:57:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:30.629 07:57:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:30.629 07:57:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 59519' 00:11:30.629 07:57:36 -- common/autotest_common.sh@945 -- # kill 59519 00:11:30.629 07:57:36 -- common/autotest_common.sh@950 -- # wait 59519 00:11:30.629 [2024-07-13 07:57:36.352514] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:30.629 [2024-07-13 07:57:36.352571] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:30.887 ************************************ 00:11:30.887 END TEST raid_state_function_test_sb 00:11:30.887 ************************************ 00:11:30.887 07:57:36 -- bdev/bdev_raid.sh@289 -- # return 0 00:11:30.887 00:11:30.887 real 0m8.089s 00:11:30.887 user 0m14.556s 00:11:30.887 sys 0m1.145s 00:11:30.887 07:57:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:30.887 07:57:36 -- common/autotest_common.sh@10 -- # set +x 00:11:30.887 07:57:36 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:11:30.887 07:57:36 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:11:30.887 07:57:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:30.887 07:57:36 -- common/autotest_common.sh@10 -- # set +x 00:11:30.887 ************************************ 00:11:30.887 START TEST raid_superblock_test 00:11:30.887 ************************************ 00:11:30.887 07:57:36 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 2 00:11:30.887 07:57:36 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:11:30.887 07:57:36 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:11:30.887 07:57:36 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:11:30.887 07:57:36 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:11:30.887 07:57:36 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:11:30.887 07:57:36 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:11:30.887 07:57:36 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:11:30.887 07:57:36 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:11:30.887 07:57:36 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:11:30.887 07:57:36 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:11:30.887 07:57:36 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:11:30.887 07:57:36 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:11:30.887 07:57:36 -- bdev/bdev_raid.sh@347 
-- # local raid_bdev 00:11:30.887 07:57:36 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:11:30.887 07:57:36 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:11:30.887 07:57:36 -- bdev/bdev_raid.sh@357 -- # raid_pid=59822 00:11:30.887 07:57:36 -- bdev/bdev_raid.sh@358 -- # waitforlisten 59822 /var/tmp/spdk-raid.sock 00:11:30.887 07:57:36 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:11:30.887 07:57:36 -- common/autotest_common.sh@819 -- # '[' -z 59822 ']' 00:11:30.887 07:57:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:30.887 07:57:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:30.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:30.887 07:57:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:30.887 07:57:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:30.887 07:57:36 -- common/autotest_common.sh@10 -- # set +x 00:11:31.146 [2024-07-13 07:57:36.737856] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:31.146 [2024-07-13 07:57:36.738009] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59822 ] 00:11:31.146 [2024-07-13 07:57:36.869421] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.146 [2024-07-13 07:57:36.939444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.404 [2024-07-13 07:57:36.991210] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:31.972 07:57:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:31.972 07:57:37 -- common/autotest_common.sh@852 -- # return 0 00:11:31.972 07:57:37 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:11:31.972 07:57:37 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:11:31.972 07:57:37 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:11:31.972 07:57:37 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:11:31.972 07:57:37 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:31.972 07:57:37 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:31.972 07:57:37 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:11:31.972 07:57:37 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:31.972 07:57:37 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:11:31.972 malloc1 00:11:31.972 07:57:37 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:32.232 [2024-07-13 07:57:37.895397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:32.232 [2024-07-13 07:57:37.897856] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.232 [2024-07-13 07:57:37.898135] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000026180 00:11:32.232 [2024-07-13 07:57:37.898251] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.232 pt1 
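The trace above shows raid_superblock_test building its first base bdev: a 32 MiB malloc bdev (malloc1, 65536 blocks of 512 bytes) wrapped in a passthru bdev (pt1) pinned to a fixed UUID so the RAID superblock written later is deterministic. A minimal sketch of the same sequence, assuming a bdev_svc app is already listening on /var/tmp/spdk-raid.sock; the rpc/sock shell variables are illustrative shorthand, not part of the test:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # 32 MiB malloc bdev with a 512-byte block size, as invoked in the trace
    $rpc -s $sock bdev_malloc_create 32 512 -b malloc1
    # wrap it in a passthru bdev with a pinned UUID so superblock contents are reproducible
    $rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001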
00:11:32.232 [2024-07-13 07:57:37.902308] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.232 [2024-07-13 07:57:37.902433] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:32.232 07:57:37 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:11:32.232 07:57:37 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:11:32.232 07:57:37 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:11:32.232 07:57:37 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:11:32.232 07:57:37 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:32.232 07:57:37 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:32.232 07:57:37 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:11:32.232 07:57:37 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:32.232 07:57:37 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:11:32.491 malloc2 00:11:32.491 07:57:38 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:32.750 [2024-07-13 07:57:38.363635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:32.750 [2024-07-13 07:57:38.363707] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.750 [2024-07-13 07:57:38.363754] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027f80 00:11:32.750 [2024-07-13 07:57:38.363793] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.750 [2024-07-13 07:57:38.365514] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.750 [2024-07-13 07:57:38.365560] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:32.750 pt2 00:11:32.750 07:57:38 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:11:32.750 07:57:38 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:11:32.750 07:57:38 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:11:32.750 [2024-07-13 07:57:38.523706] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:32.750 [2024-07-13 07:57:38.525161] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:32.750 [2024-07-13 07:57:38.525263] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000029480 00:11:32.750 [2024-07-13 07:57:38.525275] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:32.750 [2024-07-13 07:57:38.525356] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:11:32.750 [2024-07-13 07:57:38.525537] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000029480 00:11:32.750 [2024-07-13 07:57:38.525547] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000029480 00:11:32.750 [2024-07-13 07:57:38.525610] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.750 07:57:38 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:32.750 07:57:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:32.750 
07:57:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:32.750 07:57:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:32.750 07:57:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:32.750 07:57:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:32.751 07:57:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:32.751 07:57:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:32.751 07:57:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:32.751 07:57:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:32.751 07:57:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.751 07:57:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:33.009 07:57:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:33.009 "name": "raid_bdev1", 00:11:33.009 "uuid": "e2933055-c732-4f59-a242-0472327bdbf4", 00:11:33.009 "strip_size_kb": 0, 00:11:33.009 "state": "online", 00:11:33.009 "raid_level": "raid1", 00:11:33.009 "superblock": true, 00:11:33.010 "num_base_bdevs": 2, 00:11:33.010 "num_base_bdevs_discovered": 2, 00:11:33.010 "num_base_bdevs_operational": 2, 00:11:33.010 "base_bdevs_list": [ 00:11:33.010 { 00:11:33.010 "name": "pt1", 00:11:33.010 "uuid": "6aa57a3c-96af-5196-952c-ddd190f3348d", 00:11:33.010 "is_configured": true, 00:11:33.010 "data_offset": 2048, 00:11:33.010 "data_size": 63488 00:11:33.010 }, 00:11:33.010 { 00:11:33.010 "name": "pt2", 00:11:33.010 "uuid": "4be05d64-e5ee-5784-8e65-d4aae41a6798", 00:11:33.010 "is_configured": true, 00:11:33.010 "data_offset": 2048, 00:11:33.010 "data_size": 63488 00:11:33.010 } 00:11:33.010 ] 00:11:33.010 }' 00:11:33.010 07:57:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:33.010 07:57:38 -- common/autotest_common.sh@10 -- # set +x 00:11:33.577 07:57:39 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:33.577 07:57:39 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:11:33.577 [2024-07-13 07:57:39.387850] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:33.835 07:57:39 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=e2933055-c732-4f59-a242-0472327bdbf4 00:11:33.835 07:57:39 -- bdev/bdev_raid.sh@380 -- # '[' -z e2933055-c732-4f59-a242-0472327bdbf4 ']' 00:11:33.835 07:57:39 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:33.835 [2024-07-13 07:57:39.539742] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:33.835 [2024-07-13 07:57:39.539772] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:33.835 [2024-07-13 07:57:39.539845] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:33.835 [2024-07-13 07:57:39.539886] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:33.835 [2024-07-13 07:57:39.539898] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000029480 name raid_bdev1, state offline 00:11:33.835 07:57:39 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:33.835 07:57:39 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:11:34.094 07:57:39 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 
00:11:34.094 07:57:39 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:11:34.094 07:57:39 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:11:34.094 07:57:39 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:11:34.094 07:57:39 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:11:34.094 07:57:39 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:34.354 07:57:40 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:34.354 07:57:40 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:11:34.613 07:57:40 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:11:34.613 07:57:40 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:11:34.613 07:57:40 -- common/autotest_common.sh@640 -- # local es=0 00:11:34.613 07:57:40 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:11:34.613 07:57:40 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:34.613 07:57:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:34.613 07:57:40 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:34.613 07:57:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:34.613 07:57:40 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:34.613 07:57:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:34.613 07:57:40 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:34.613 07:57:40 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:34.613 07:57:40 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:11:34.613 [2024-07-13 07:57:40.399841] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:34.613 [2024-07-13 07:57:40.401183] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:34.613 [2024-07-13 07:57:40.401223] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:11:34.613 [2024-07-13 07:57:40.401281] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:11:34.613 [2024-07-13 07:57:40.401307] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:34.613 [2024-07-13 07:57:40.401317] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000029a80 name raid_bdev1, state configuring 00:11:34.613 request: 00:11:34.613 { 00:11:34.613 "name": "raid_bdev1", 00:11:34.613 "raid_level": "raid1", 00:11:34.613 "base_bdevs": [ 00:11:34.613 "malloc1", 00:11:34.613 "malloc2" 00:11:34.613 ], 00:11:34.613 "superblock": false, 00:11:34.613 "method": "bdev_raid_create", 00:11:34.613 "req_id": 1 00:11:34.613 } 00:11:34.613 Got JSON-RPC error response 00:11:34.613 response: 
00:11:34.613 { 00:11:34.613 "code": -17, 00:11:34.613 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:34.613 } 00:11:34.613 07:57:40 -- common/autotest_common.sh@643 -- # es=1 00:11:34.613 07:57:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:34.613 07:57:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:34.613 07:57:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:34.613 07:57:40 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:11:34.613 07:57:40 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:34.872 07:57:40 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:11:34.872 07:57:40 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:11:34.872 07:57:40 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:35.132 [2024-07-13 07:57:40.823859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:35.132 [2024-07-13 07:57:40.823948] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.132 [2024-07-13 07:57:40.823985] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002a980 00:11:35.132 [2024-07-13 07:57:40.824011] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.132 [2024-07-13 07:57:40.825673] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.132 [2024-07-13 07:57:40.825716] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:35.132 [2024-07-13 07:57:40.825782] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:11:35.132 [2024-07-13 07:57:40.825820] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:35.132 pt1 00:11:35.132 07:57:40 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:35.132 07:57:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:35.132 07:57:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:35.132 07:57:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:35.132 07:57:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:35.132 07:57:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:35.132 07:57:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:35.132 07:57:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:35.132 07:57:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:35.132 07:57:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:35.132 07:57:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.132 07:57:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:35.391 07:57:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:35.391 "name": "raid_bdev1", 00:11:35.391 "uuid": "e2933055-c732-4f59-a242-0472327bdbf4", 00:11:35.391 "strip_size_kb": 0, 00:11:35.391 "state": "configuring", 00:11:35.391 "raid_level": "raid1", 00:11:35.391 "superblock": true, 00:11:35.391 "num_base_bdevs": 2, 00:11:35.391 "num_base_bdevs_discovered": 1, 00:11:35.391 "num_base_bdevs_operational": 2, 00:11:35.391 "base_bdevs_list": [ 00:11:35.391 { 00:11:35.391 "name": "pt1", 00:11:35.391 "uuid": 
"6aa57a3c-96af-5196-952c-ddd190f3348d", 00:11:35.391 "is_configured": true, 00:11:35.391 "data_offset": 2048, 00:11:35.391 "data_size": 63488 00:11:35.391 }, 00:11:35.391 { 00:11:35.391 "name": null, 00:11:35.391 "uuid": "4be05d64-e5ee-5784-8e65-d4aae41a6798", 00:11:35.391 "is_configured": false, 00:11:35.391 "data_offset": 2048, 00:11:35.391 "data_size": 63488 00:11:35.391 } 00:11:35.391 ] 00:11:35.391 }' 00:11:35.391 07:57:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:35.391 07:57:41 -- common/autotest_common.sh@10 -- # set +x 00:11:35.958 07:57:41 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:11:35.958 07:57:41 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:11:35.958 07:57:41 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:11:35.958 07:57:41 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:36.216 [2024-07-13 07:57:41.771974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:36.216 [2024-07-13 07:57:41.772062] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.216 [2024-07-13 07:57:41.772115] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002c480 00:11:36.216 [2024-07-13 07:57:41.772145] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.216 [2024-07-13 07:57:41.772403] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.216 [2024-07-13 07:57:41.772434] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:36.216 [2024-07-13 07:57:41.772688] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:11:36.216 [2024-07-13 07:57:41.772733] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:36.216 [2024-07-13 07:57:41.772810] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002be80 00:11:36.216 [2024-07-13 07:57:41.772819] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:36.216 [2024-07-13 07:57:41.772869] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:11:36.216 [2024-07-13 07:57:41.773024] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002be80 00:11:36.216 [2024-07-13 07:57:41.773035] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002be80 00:11:36.216 [2024-07-13 07:57:41.773083] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.216 pt2 00:11:36.216 07:57:41 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:11:36.216 07:57:41 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:11:36.216 07:57:41 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:36.216 07:57:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:36.216 07:57:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:36.216 07:57:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:36.216 07:57:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:36.216 07:57:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:36.216 07:57:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:36.216 07:57:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:36.216 07:57:41 -- bdev/bdev_raid.sh@124 
-- # local num_base_bdevs_discovered 00:11:36.216 07:57:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:36.216 07:57:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.216 07:57:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:36.216 07:57:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:36.216 "name": "raid_bdev1", 00:11:36.216 "uuid": "e2933055-c732-4f59-a242-0472327bdbf4", 00:11:36.216 "strip_size_kb": 0, 00:11:36.216 "state": "online", 00:11:36.216 "raid_level": "raid1", 00:11:36.216 "superblock": true, 00:11:36.216 "num_base_bdevs": 2, 00:11:36.216 "num_base_bdevs_discovered": 2, 00:11:36.216 "num_base_bdevs_operational": 2, 00:11:36.216 "base_bdevs_list": [ 00:11:36.216 { 00:11:36.216 "name": "pt1", 00:11:36.216 "uuid": "6aa57a3c-96af-5196-952c-ddd190f3348d", 00:11:36.216 "is_configured": true, 00:11:36.216 "data_offset": 2048, 00:11:36.216 "data_size": 63488 00:11:36.216 }, 00:11:36.216 { 00:11:36.216 "name": "pt2", 00:11:36.216 "uuid": "4be05d64-e5ee-5784-8e65-d4aae41a6798", 00:11:36.216 "is_configured": true, 00:11:36.216 "data_offset": 2048, 00:11:36.216 "data_size": 63488 00:11:36.216 } 00:11:36.216 ] 00:11:36.216 }' 00:11:36.217 07:57:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:36.217 07:57:41 -- common/autotest_common.sh@10 -- # set +x 00:11:36.782 07:57:42 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:36.782 07:57:42 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:11:37.040 [2024-07-13 07:57:42.772192] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:37.040 07:57:42 -- bdev/bdev_raid.sh@430 -- # '[' e2933055-c732-4f59-a242-0472327bdbf4 '!=' e2933055-c732-4f59-a242-0472327bdbf4 ']' 00:11:37.040 07:57:42 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:11:37.040 07:57:42 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:11:37.040 07:57:42 -- bdev/bdev_raid.sh@196 -- # return 0 00:11:37.040 07:57:42 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:11:37.296 [2024-07-13 07:57:43.004153] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:37.296 07:57:43 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:37.296 07:57:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:37.296 07:57:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:37.296 07:57:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:37.296 07:57:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:37.296 07:57:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:11:37.296 07:57:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:37.296 07:57:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:37.296 07:57:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:37.296 07:57:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:37.296 07:57:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:37.296 07:57:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.554 07:57:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:37.554 "name": "raid_bdev1", 00:11:37.554 "uuid": "e2933055-c732-4f59-a242-0472327bdbf4", 00:11:37.554 
"strip_size_kb": 0, 00:11:37.554 "state": "online", 00:11:37.554 "raid_level": "raid1", 00:11:37.554 "superblock": true, 00:11:37.554 "num_base_bdevs": 2, 00:11:37.554 "num_base_bdevs_discovered": 1, 00:11:37.554 "num_base_bdevs_operational": 1, 00:11:37.554 "base_bdevs_list": [ 00:11:37.554 { 00:11:37.554 "name": null, 00:11:37.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.554 "is_configured": false, 00:11:37.554 "data_offset": 2048, 00:11:37.554 "data_size": 63488 00:11:37.554 }, 00:11:37.554 { 00:11:37.554 "name": "pt2", 00:11:37.554 "uuid": "4be05d64-e5ee-5784-8e65-d4aae41a6798", 00:11:37.554 "is_configured": true, 00:11:37.554 "data_offset": 2048, 00:11:37.554 "data_size": 63488 00:11:37.554 } 00:11:37.554 ] 00:11:37.554 }' 00:11:37.554 07:57:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:37.554 07:57:43 -- common/autotest_common.sh@10 -- # set +x 00:11:38.119 07:57:43 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:38.377 [2024-07-13 07:57:43.932193] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:38.377 [2024-07-13 07:57:43.932228] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:38.377 [2024-07-13 07:57:43.932296] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:38.377 [2024-07-13 07:57:43.932336] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:38.377 [2024-07-13 07:57:43.932345] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002be80 name raid_bdev1, state offline 00:11:38.377 07:57:43 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:11:38.377 07:57:43 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:38.377 07:57:44 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:11:38.377 07:57:44 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:11:38.377 07:57:44 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:11:38.377 07:57:44 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:11:38.377 07:57:44 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:38.636 07:57:44 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:11:38.636 07:57:44 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:11:38.636 07:57:44 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:11:38.636 07:57:44 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:11:38.636 07:57:44 -- bdev/bdev_raid.sh@462 -- # i=1 00:11:38.636 07:57:44 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:38.895 [2024-07-13 07:57:44.508220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:38.895 [2024-07-13 07:57:44.508321] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.895 [2024-07-13 07:57:44.508376] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002d980 00:11:38.895 [2024-07-13 07:57:44.508406] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.895 [2024-07-13 07:57:44.510150] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.895 [2024-07-13 07:57:44.510204] vbdev_passthru.c: 
705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:38.895 [2024-07-13 07:57:44.510257] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:11:38.895 [2024-07-13 07:57:44.510286] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:38.895 [2024-07-13 07:57:44.510335] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002ee80 00:11:38.895 [2024-07-13 07:57:44.510343] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:38.895 [2024-07-13 07:57:44.510384] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:11:38.895 [2024-07-13 07:57:44.510552] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002ee80 00:11:38.895 [2024-07-13 07:57:44.510563] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002ee80 00:11:38.895 [2024-07-13 07:57:44.510613] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.895 pt2 00:11:38.895 07:57:44 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:38.895 07:57:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:38.895 07:57:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:38.895 07:57:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:38.895 07:57:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:38.895 07:57:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:11:38.895 07:57:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:38.895 07:57:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:38.895 07:57:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:38.895 07:57:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:38.895 07:57:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.895 07:57:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:39.154 07:57:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:39.154 "name": "raid_bdev1", 00:11:39.154 "uuid": "e2933055-c732-4f59-a242-0472327bdbf4", 00:11:39.154 "strip_size_kb": 0, 00:11:39.154 "state": "online", 00:11:39.154 "raid_level": "raid1", 00:11:39.154 "superblock": true, 00:11:39.154 "num_base_bdevs": 2, 00:11:39.154 "num_base_bdevs_discovered": 1, 00:11:39.154 "num_base_bdevs_operational": 1, 00:11:39.154 "base_bdevs_list": [ 00:11:39.154 { 00:11:39.154 "name": null, 00:11:39.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.154 "is_configured": false, 00:11:39.154 "data_offset": 2048, 00:11:39.154 "data_size": 63488 00:11:39.154 }, 00:11:39.154 { 00:11:39.154 "name": "pt2", 00:11:39.154 "uuid": "4be05d64-e5ee-5784-8e65-d4aae41a6798", 00:11:39.154 "is_configured": true, 00:11:39.154 "data_offset": 2048, 00:11:39.154 "data_size": 63488 00:11:39.154 } 00:11:39.154 ] 00:11:39.154 }' 00:11:39.154 07:57:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:39.154 07:57:44 -- common/autotest_common.sh@10 -- # set +x 00:11:39.721 07:57:45 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:11:39.721 07:57:45 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:39.721 07:57:45 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:11:39.721 [2024-07-13 07:57:45.456473] 
bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:39.721 07:57:45 -- bdev/bdev_raid.sh@506 -- # '[' e2933055-c732-4f59-a242-0472327bdbf4 '!=' e2933055-c732-4f59-a242-0472327bdbf4 ']' 00:11:39.721 07:57:45 -- bdev/bdev_raid.sh@511 -- # killprocess 59822 00:11:39.721 07:57:45 -- common/autotest_common.sh@926 -- # '[' -z 59822 ']' 00:11:39.721 07:57:45 -- common/autotest_common.sh@930 -- # kill -0 59822 00:11:39.721 07:57:45 -- common/autotest_common.sh@931 -- # uname 00:11:39.721 07:57:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:39.721 07:57:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 59822 00:11:39.721 killing process with pid 59822 00:11:39.721 07:57:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:39.721 07:57:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:39.721 07:57:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 59822' 00:11:39.721 07:57:45 -- common/autotest_common.sh@945 -- # kill 59822 00:11:39.721 07:57:45 -- common/autotest_common.sh@950 -- # wait 59822 00:11:39.721 [2024-07-13 07:57:45.498952] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:39.721 [2024-07-13 07:57:45.499018] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:39.721 [2024-07-13 07:57:45.499049] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:39.721 [2024-07-13 07:57:45.499058] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002ee80 name raid_bdev1, state offline 00:11:39.721 [2024-07-13 07:57:45.519065] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:39.980 ************************************ 00:11:39.980 END TEST raid_superblock_test 00:11:39.980 ************************************ 00:11:39.980 07:57:45 -- bdev/bdev_raid.sh@513 -- # return 0 00:11:39.980 00:11:39.980 real 0m9.113s 00:11:39.980 user 0m16.656s 00:11:39.980 sys 0m1.238s 00:11:39.980 07:57:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:39.980 07:57:45 -- common/autotest_common.sh@10 -- # set +x 00:11:39.980 07:57:45 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:11:39.980 07:57:45 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:11:39.980 07:57:45 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:11:39.980 07:57:45 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:11:39.980 07:57:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:39.980 07:57:45 -- common/autotest_common.sh@10 -- # set +x 00:11:39.980 ************************************ 00:11:39.980 START TEST raid_state_function_test 00:11:39.980 ************************************ 00:11:39.980 07:57:45 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 false 00:11:39.980 07:57:45 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:11:39.980 07:57:45 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:11:39.980 07:57:45 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:11:39.980 07:57:45 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:11:39.980 07:57:45 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:11:39.980 07:57:45 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:11:39.980 07:57:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:39.980 07:57:45 -- 
bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:11:39.980 07:57:45 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:39.980 07:57:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:39.980 07:57:45 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:11:39.980 07:57:45 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:39.980 07:57:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:39.980 07:57:45 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:11:39.980 07:57:45 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:39.980 07:57:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:39.980 Process raid pid: 60150 00:11:39.980 07:57:45 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:11:39.980 07:57:45 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:11:39.980 07:57:45 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:11:39.980 07:57:45 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:11:39.980 07:57:45 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:11:39.980 07:57:45 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:11:39.980 07:57:45 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:11:39.980 07:57:45 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:11:39.980 07:57:45 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:11:39.980 07:57:45 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:11:39.980 07:57:45 -- bdev/bdev_raid.sh@226 -- # raid_pid=60150 00:11:39.980 07:57:45 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 60150' 00:11:39.980 07:57:45 -- bdev/bdev_raid.sh@228 -- # waitforlisten 60150 /var/tmp/spdk-raid.sock 00:11:39.980 07:57:45 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:39.980 07:57:45 -- common/autotest_common.sh@819 -- # '[' -z 60150 ']' 00:11:39.980 07:57:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:39.980 07:57:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:39.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:39.980 07:57:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:39.980 07:57:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:39.980 07:57:45 -- common/autotest_common.sh@10 -- # set +x 00:11:40.239 [2024-07-13 07:57:45.911074] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
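Everything from here on runs against a fresh bdev_svc instance dedicated to raid_state_function_test, reachable on the same private socket. The launch recorded above can be reproduced roughly as follows; the readiness poll is a stand-in for the harness's waitforlisten helper, not its actual code:

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    # poll until the RPC server answers before issuing further commands
    until $rpc -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done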
00:11:40.239 [2024-07-13 07:57:45.911317] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.497 [2024-07-13 07:57:46.060899] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.497 [2024-07-13 07:57:46.114180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.497 [2024-07-13 07:57:46.164104] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:41.064 07:57:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:41.064 07:57:46 -- common/autotest_common.sh@852 -- # return 0 00:11:41.064 07:57:46 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:41.323 [2024-07-13 07:57:46.965707] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:41.323 [2024-07-13 07:57:46.965777] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:41.323 [2024-07-13 07:57:46.965789] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:41.323 [2024-07-13 07:57:46.965826] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:41.323 [2024-07-13 07:57:46.965833] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:41.323 [2024-07-13 07:57:46.965866] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:41.323 07:57:46 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:41.323 07:57:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:41.323 07:57:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:41.323 07:57:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:41.323 07:57:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:41.323 07:57:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:41.323 07:57:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:41.323 07:57:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:41.323 07:57:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:41.323 07:57:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:41.323 07:57:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.323 07:57:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:41.582 07:57:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:41.582 "name": "Existed_Raid", 00:11:41.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.582 "strip_size_kb": 64, 00:11:41.582 "state": "configuring", 00:11:41.582 "raid_level": "raid0", 00:11:41.582 "superblock": false, 00:11:41.582 "num_base_bdevs": 3, 00:11:41.582 "num_base_bdevs_discovered": 0, 00:11:41.582 "num_base_bdevs_operational": 3, 00:11:41.582 "base_bdevs_list": [ 00:11:41.582 { 00:11:41.582 "name": "BaseBdev1", 00:11:41.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.582 "is_configured": false, 00:11:41.582 "data_offset": 0, 00:11:41.582 "data_size": 0 00:11:41.582 }, 00:11:41.582 { 00:11:41.582 "name": "BaseBdev2", 00:11:41.582 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:41.582 "is_configured": false, 00:11:41.582 "data_offset": 0, 00:11:41.582 "data_size": 0 00:11:41.582 }, 00:11:41.582 { 00:11:41.582 "name": "BaseBdev3", 00:11:41.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.582 "is_configured": false, 00:11:41.582 "data_offset": 0, 00:11:41.582 "data_size": 0 00:11:41.582 } 00:11:41.582 ] 00:11:41.582 }' 00:11:41.582 07:57:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:41.582 07:57:47 -- common/autotest_common.sh@10 -- # set +x 00:11:42.150 07:57:47 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:42.407 [2024-07-13 07:57:47.981755] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:42.407 [2024-07-13 07:57:47.981792] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000025880 name Existed_Raid, state configuring 00:11:42.407 07:57:47 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:42.407 [2024-07-13 07:57:48.125828] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:42.407 [2024-07-13 07:57:48.125885] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:42.407 [2024-07-13 07:57:48.125895] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:42.407 [2024-07-13 07:57:48.125911] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:42.407 [2024-07-13 07:57:48.125918] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:42.407 [2024-07-13 07:57:48.125941] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:42.407 07:57:48 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:42.665 BaseBdev1 00:11:42.665 [2024-07-13 07:57:48.283585] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:42.665 07:57:48 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:11:42.665 07:57:48 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:11:42.665 07:57:48 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:42.665 07:57:48 -- common/autotest_common.sh@889 -- # local i 00:11:42.665 07:57:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:42.665 07:57:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:42.665 07:57:48 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:42.923 07:57:48 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:42.923 [ 00:11:42.923 { 00:11:42.923 "name": "BaseBdev1", 00:11:42.923 "aliases": [ 00:11:42.923 "b873ccf0-ebbf-4087-8917-47c398dd65ac" 00:11:42.923 ], 00:11:42.923 "product_name": "Malloc disk", 00:11:42.923 "block_size": 512, 00:11:42.923 "num_blocks": 65536, 00:11:42.923 "uuid": "b873ccf0-ebbf-4087-8917-47c398dd65ac", 00:11:42.923 "assigned_rate_limits": { 00:11:42.923 "rw_ios_per_sec": 0, 00:11:42.923 "rw_mbytes_per_sec": 0, 00:11:42.923 "r_mbytes_per_sec": 0, 00:11:42.923 "w_mbytes_per_sec": 0 
00:11:42.923 }, 00:11:42.923 "claimed": true, 00:11:42.923 "claim_type": "exclusive_write", 00:11:42.923 "zoned": false, 00:11:42.923 "supported_io_types": { 00:11:42.923 "read": true, 00:11:42.923 "write": true, 00:11:42.923 "unmap": true, 00:11:42.923 "write_zeroes": true, 00:11:42.923 "flush": true, 00:11:42.923 "reset": true, 00:11:42.923 "compare": false, 00:11:42.923 "compare_and_write": false, 00:11:42.923 "abort": true, 00:11:42.923 "nvme_admin": false, 00:11:42.923 "nvme_io": false 00:11:42.923 }, 00:11:42.923 "memory_domains": [ 00:11:42.923 { 00:11:42.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.923 "dma_device_type": 2 00:11:42.923 } 00:11:42.923 ], 00:11:42.923 "driver_specific": {} 00:11:42.923 } 00:11:42.923 ] 00:11:42.923 07:57:48 -- common/autotest_common.sh@895 -- # return 0 00:11:42.923 07:57:48 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:42.923 07:57:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:42.923 07:57:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:42.923 07:57:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:42.923 07:57:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:42.923 07:57:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:42.923 07:57:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:42.923 07:57:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:42.923 07:57:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:42.923 07:57:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:42.923 07:57:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.923 07:57:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:43.181 07:57:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:43.181 "name": "Existed_Raid", 00:11:43.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.181 "strip_size_kb": 64, 00:11:43.181 "state": "configuring", 00:11:43.181 "raid_level": "raid0", 00:11:43.181 "superblock": false, 00:11:43.181 "num_base_bdevs": 3, 00:11:43.181 "num_base_bdevs_discovered": 1, 00:11:43.181 "num_base_bdevs_operational": 3, 00:11:43.181 "base_bdevs_list": [ 00:11:43.181 { 00:11:43.181 "name": "BaseBdev1", 00:11:43.181 "uuid": "b873ccf0-ebbf-4087-8917-47c398dd65ac", 00:11:43.181 "is_configured": true, 00:11:43.181 "data_offset": 0, 00:11:43.181 "data_size": 65536 00:11:43.181 }, 00:11:43.181 { 00:11:43.181 "name": "BaseBdev2", 00:11:43.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.181 "is_configured": false, 00:11:43.181 "data_offset": 0, 00:11:43.181 "data_size": 0 00:11:43.181 }, 00:11:43.181 { 00:11:43.181 "name": "BaseBdev3", 00:11:43.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.181 "is_configured": false, 00:11:43.181 "data_offset": 0, 00:11:43.181 "data_size": 0 00:11:43.181 } 00:11:43.181 ] 00:11:43.181 }' 00:11:43.181 07:57:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:43.181 07:57:48 -- common/autotest_common.sh@10 -- # set +x 00:11:43.748 07:57:49 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:44.007 [2024-07-13 07:57:49.623828] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:44.007 [2024-07-13 07:57:49.623869] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000026480 name Existed_Raid, state configuring 00:11:44.007 07:57:49 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:11:44.007 07:57:49 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:44.007 [2024-07-13 07:57:49.767905] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:44.007 [2024-07-13 07:57:49.769396] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:44.007 [2024-07-13 07:57:49.769451] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:44.007 [2024-07-13 07:57:49.769471] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:44.007 [2024-07-13 07:57:49.769497] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:44.007 07:57:49 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:11:44.007 07:57:49 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:44.007 07:57:49 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:44.007 07:57:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:44.007 07:57:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:44.007 07:57:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:44.007 07:57:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:44.007 07:57:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:44.007 07:57:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:44.007 07:57:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:44.007 07:57:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:44.007 07:57:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:44.007 07:57:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.007 07:57:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:44.268 07:57:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:44.268 "name": "Existed_Raid", 00:11:44.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.268 "strip_size_kb": 64, 00:11:44.268 "state": "configuring", 00:11:44.268 "raid_level": "raid0", 00:11:44.268 "superblock": false, 00:11:44.268 "num_base_bdevs": 3, 00:11:44.268 "num_base_bdevs_discovered": 1, 00:11:44.268 "num_base_bdevs_operational": 3, 00:11:44.268 "base_bdevs_list": [ 00:11:44.268 { 00:11:44.268 "name": "BaseBdev1", 00:11:44.268 "uuid": "b873ccf0-ebbf-4087-8917-47c398dd65ac", 00:11:44.268 "is_configured": true, 00:11:44.268 "data_offset": 0, 00:11:44.268 "data_size": 65536 00:11:44.268 }, 00:11:44.268 { 00:11:44.268 "name": "BaseBdev2", 00:11:44.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.268 "is_configured": false, 00:11:44.268 "data_offset": 0, 00:11:44.268 "data_size": 0 00:11:44.268 }, 00:11:44.268 { 00:11:44.268 "name": "BaseBdev3", 00:11:44.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.268 "is_configured": false, 00:11:44.268 "data_offset": 0, 00:11:44.268 "data_size": 0 00:11:44.268 } 00:11:44.268 ] 00:11:44.268 }' 00:11:44.268 07:57:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:44.268 07:57:50 -- common/autotest_common.sh@10 -- # set +x 00:11:44.875 07:57:50 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:45.136 BaseBdev2 00:11:45.136 [2024-07-13 07:57:50.770882] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:45.136 07:57:50 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:11:45.136 07:57:50 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:11:45.136 07:57:50 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:45.136 07:57:50 -- common/autotest_common.sh@889 -- # local i 00:11:45.136 07:57:50 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:45.136 07:57:50 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:45.136 07:57:50 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:45.394 07:57:50 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:45.394 [ 00:11:45.394 { 00:11:45.394 "name": "BaseBdev2", 00:11:45.394 "aliases": [ 00:11:45.394 "808a66ab-6b18-41fd-bd61-6e54a21fe871" 00:11:45.394 ], 00:11:45.394 "product_name": "Malloc disk", 00:11:45.394 "block_size": 512, 00:11:45.394 "num_blocks": 65536, 00:11:45.394 "uuid": "808a66ab-6b18-41fd-bd61-6e54a21fe871", 00:11:45.394 "assigned_rate_limits": { 00:11:45.394 "rw_ios_per_sec": 0, 00:11:45.394 "rw_mbytes_per_sec": 0, 00:11:45.394 "r_mbytes_per_sec": 0, 00:11:45.394 "w_mbytes_per_sec": 0 00:11:45.394 }, 00:11:45.394 "claimed": true, 00:11:45.394 "claim_type": "exclusive_write", 00:11:45.394 "zoned": false, 00:11:45.394 "supported_io_types": { 00:11:45.394 "read": true, 00:11:45.394 "write": true, 00:11:45.394 "unmap": true, 00:11:45.394 "write_zeroes": true, 00:11:45.394 "flush": true, 00:11:45.394 "reset": true, 00:11:45.394 "compare": false, 00:11:45.394 "compare_and_write": false, 00:11:45.394 "abort": true, 00:11:45.394 "nvme_admin": false, 00:11:45.394 "nvme_io": false 00:11:45.394 }, 00:11:45.394 "memory_domains": [ 00:11:45.394 { 00:11:45.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.394 "dma_device_type": 2 00:11:45.394 } 00:11:45.394 ], 00:11:45.394 "driver_specific": {} 00:11:45.394 } 00:11:45.394 ] 00:11:45.394 07:57:51 -- common/autotest_common.sh@895 -- # return 0 00:11:45.394 07:57:51 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:11:45.394 07:57:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:45.394 07:57:51 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:45.394 07:57:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:45.394 07:57:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:45.394 07:57:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:45.394 07:57:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:45.394 07:57:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:45.394 07:57:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:45.394 07:57:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:45.394 07:57:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:45.394 07:57:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:45.394 07:57:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.394 07:57:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
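Up to this point the run has exercised the configuring-state checks: bdev_raid_create against three not-yet-existing malloc bdevs leaves Existed_Raid in state "configuring" with num_base_bdevs_discovered 0, and each bdev_malloc_create that follows bumps the discovered count as the raid claims the new base bdev. The verify_raid_bdev_state helper (bdev_raid.sh@117-129) pulls the named entry out of bdev_raid_get_bdevs and, beyond what the xtrace shows here, presumably compares its fields against the expected values passed in. Condensed to its essentials, the check pattern is:

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
  info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  # field-by-field assertions against the expected state and counts:
  [ "$(jq -r '.state' <<< "$info")" = configuring ]
  [ "$(jq -r '.num_base_bdevs_discovered' <<< "$info")" -eq 2 ]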
00:11:45.652 07:57:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:45.652 "name": "Existed_Raid", 00:11:45.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.652 "strip_size_kb": 64, 00:11:45.652 "state": "configuring", 00:11:45.652 "raid_level": "raid0", 00:11:45.652 "superblock": false, 00:11:45.652 "num_base_bdevs": 3, 00:11:45.652 "num_base_bdevs_discovered": 2, 00:11:45.652 "num_base_bdevs_operational": 3, 00:11:45.652 "base_bdevs_list": [ 00:11:45.652 { 00:11:45.652 "name": "BaseBdev1", 00:11:45.652 "uuid": "b873ccf0-ebbf-4087-8917-47c398dd65ac", 00:11:45.652 "is_configured": true, 00:11:45.652 "data_offset": 0, 00:11:45.652 "data_size": 65536 00:11:45.652 }, 00:11:45.652 { 00:11:45.652 "name": "BaseBdev2", 00:11:45.652 "uuid": "808a66ab-6b18-41fd-bd61-6e54a21fe871", 00:11:45.652 "is_configured": true, 00:11:45.652 "data_offset": 0, 00:11:45.652 "data_size": 65536 00:11:45.652 }, 00:11:45.652 { 00:11:45.652 "name": "BaseBdev3", 00:11:45.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.652 "is_configured": false, 00:11:45.652 "data_offset": 0, 00:11:45.652 "data_size": 0 00:11:45.652 } 00:11:45.652 ] 00:11:45.652 }' 00:11:45.652 07:57:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:45.652 07:57:51 -- common/autotest_common.sh@10 -- # set +x 00:11:46.219 07:57:51 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:46.478 BaseBdev3 00:11:46.478 [2024-07-13 07:57:52.046664] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:46.478 [2024-07-13 07:57:52.046705] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000027680 00:11:46.478 [2024-07-13 07:57:52.046714] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:46.478 [2024-07-13 07:57:52.046794] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:11:46.478 [2024-07-13 07:57:52.046968] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000027680 00:11:46.478 [2024-07-13 07:57:52.046977] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000027680 00:11:46.478 [2024-07-13 07:57:52.047116] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.478 07:57:52 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:11:46.478 07:57:52 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:11:46.478 07:57:52 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:46.478 07:57:52 -- common/autotest_common.sh@889 -- # local i 00:11:46.478 07:57:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:46.478 07:57:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:46.478 07:57:52 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:46.478 07:57:52 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:46.736 [ 00:11:46.736 { 00:11:46.736 "name": "BaseBdev3", 00:11:46.736 "aliases": [ 00:11:46.736 "13225b3f-7cd5-4da8-9a7a-79837f417b23" 00:11:46.736 ], 00:11:46.736 "product_name": "Malloc disk", 00:11:46.736 "block_size": 512, 00:11:46.736 "num_blocks": 65536, 00:11:46.736 "uuid": "13225b3f-7cd5-4da8-9a7a-79837f417b23", 00:11:46.736 "assigned_rate_limits": { 00:11:46.736 
"rw_ios_per_sec": 0, 00:11:46.736 "rw_mbytes_per_sec": 0, 00:11:46.736 "r_mbytes_per_sec": 0, 00:11:46.736 "w_mbytes_per_sec": 0 00:11:46.736 }, 00:11:46.736 "claimed": true, 00:11:46.736 "claim_type": "exclusive_write", 00:11:46.736 "zoned": false, 00:11:46.736 "supported_io_types": { 00:11:46.736 "read": true, 00:11:46.736 "write": true, 00:11:46.736 "unmap": true, 00:11:46.736 "write_zeroes": true, 00:11:46.736 "flush": true, 00:11:46.736 "reset": true, 00:11:46.736 "compare": false, 00:11:46.736 "compare_and_write": false, 00:11:46.736 "abort": true, 00:11:46.736 "nvme_admin": false, 00:11:46.736 "nvme_io": false 00:11:46.736 }, 00:11:46.736 "memory_domains": [ 00:11:46.736 { 00:11:46.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.736 "dma_device_type": 2 00:11:46.736 } 00:11:46.736 ], 00:11:46.736 "driver_specific": {} 00:11:46.736 } 00:11:46.736 ] 00:11:46.736 07:57:52 -- common/autotest_common.sh@895 -- # return 0 00:11:46.736 07:57:52 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:11:46.736 07:57:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:46.736 07:57:52 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:46.736 07:57:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:46.736 07:57:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:46.736 07:57:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:46.736 07:57:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:46.736 07:57:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:46.736 07:57:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:46.736 07:57:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:46.736 07:57:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:46.736 07:57:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:46.736 07:57:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:46.736 07:57:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.995 07:57:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:46.995 "name": "Existed_Raid", 00:11:46.995 "uuid": "3af2e83d-7606-4d62-9249-a0f67dcf0f51", 00:11:46.995 "strip_size_kb": 64, 00:11:46.995 "state": "online", 00:11:46.995 "raid_level": "raid0", 00:11:46.995 "superblock": false, 00:11:46.995 "num_base_bdevs": 3, 00:11:46.995 "num_base_bdevs_discovered": 3, 00:11:46.995 "num_base_bdevs_operational": 3, 00:11:46.995 "base_bdevs_list": [ 00:11:46.995 { 00:11:46.995 "name": "BaseBdev1", 00:11:46.995 "uuid": "b873ccf0-ebbf-4087-8917-47c398dd65ac", 00:11:46.995 "is_configured": true, 00:11:46.995 "data_offset": 0, 00:11:46.995 "data_size": 65536 00:11:46.995 }, 00:11:46.995 { 00:11:46.995 "name": "BaseBdev2", 00:11:46.995 "uuid": "808a66ab-6b18-41fd-bd61-6e54a21fe871", 00:11:46.995 "is_configured": true, 00:11:46.995 "data_offset": 0, 00:11:46.995 "data_size": 65536 00:11:46.995 }, 00:11:46.995 { 00:11:46.995 "name": "BaseBdev3", 00:11:46.995 "uuid": "13225b3f-7cd5-4da8-9a7a-79837f417b23", 00:11:46.995 "is_configured": true, 00:11:46.995 "data_offset": 0, 00:11:46.995 "data_size": 65536 00:11:46.995 } 00:11:46.995 ] 00:11:46.995 }' 00:11:46.995 07:57:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:46.995 07:57:52 -- common/autotest_common.sh@10 -- # set +x 00:11:47.560 07:57:53 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:11:47.560 [2024-07-13 07:57:53.283014] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:47.560 [2024-07-13 07:57:53.283061] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:47.560 [2024-07-13 07:57:53.283112] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:47.560 07:57:53 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:11:47.560 07:57:53 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:11:47.560 07:57:53 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:11:47.560 07:57:53 -- bdev/bdev_raid.sh@197 -- # return 1 00:11:47.560 07:57:53 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:11:47.560 07:57:53 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:11:47.560 07:57:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:47.560 07:57:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:11:47.560 07:57:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:47.560 07:57:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:47.560 07:57:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:47.560 07:57:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:47.560 07:57:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:47.560 07:57:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:47.560 07:57:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:47.560 07:57:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:47.560 07:57:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.818 07:57:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:47.818 "name": "Existed_Raid", 00:11:47.818 "uuid": "3af2e83d-7606-4d62-9249-a0f67dcf0f51", 00:11:47.818 "strip_size_kb": 64, 00:11:47.818 "state": "offline", 00:11:47.818 "raid_level": "raid0", 00:11:47.818 "superblock": false, 00:11:47.818 "num_base_bdevs": 3, 00:11:47.818 "num_base_bdevs_discovered": 2, 00:11:47.818 "num_base_bdevs_operational": 2, 00:11:47.818 "base_bdevs_list": [ 00:11:47.818 { 00:11:47.818 "name": null, 00:11:47.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.818 "is_configured": false, 00:11:47.818 "data_offset": 0, 00:11:47.818 "data_size": 65536 00:11:47.818 }, 00:11:47.818 { 00:11:47.818 "name": "BaseBdev2", 00:11:47.818 "uuid": "808a66ab-6b18-41fd-bd61-6e54a21fe871", 00:11:47.818 "is_configured": true, 00:11:47.818 "data_offset": 0, 00:11:47.818 "data_size": 65536 00:11:47.818 }, 00:11:47.818 { 00:11:47.818 "name": "BaseBdev3", 00:11:47.818 "uuid": "13225b3f-7cd5-4da8-9a7a-79837f417b23", 00:11:47.818 "is_configured": true, 00:11:47.818 "data_offset": 0, 00:11:47.818 "data_size": 65536 00:11:47.818 } 00:11:47.818 ] 00:11:47.818 }' 00:11:47.818 07:57:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:47.818 07:57:53 -- common/autotest_common.sh@10 -- # set +x 00:11:48.385 07:57:54 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:11:48.385 07:57:54 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:48.385 07:57:54 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:11:48.385 07:57:54 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:48.644 07:57:54 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:11:48.644 07:57:54 -- bdev/bdev_raid.sh@275 -- 
# '[' Existed_Raid '!=' Existed_Raid ']' 00:11:48.644 07:57:54 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:48.644 [2024-07-13 07:57:54.391996] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:48.644 07:57:54 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:11:48.644 07:57:54 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:48.644 07:57:54 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:48.644 07:57:54 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:11:48.903 07:57:54 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:11:48.903 07:57:54 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:48.903 07:57:54 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:49.162 [2024-07-13 07:57:54.718495] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:49.162 [2024-07-13 07:57:54.718557] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027680 name Existed_Raid, state offline 00:11:49.162 07:57:54 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:11:49.162 07:57:54 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:49.162 07:57:54 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:49.162 07:57:54 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:11:49.162 07:57:54 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:11:49.162 07:57:54 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:11:49.162 07:57:54 -- bdev/bdev_raid.sh@287 -- # killprocess 60150 00:11:49.162 07:57:54 -- common/autotest_common.sh@926 -- # '[' -z 60150 ']' 00:11:49.162 07:57:54 -- common/autotest_common.sh@930 -- # kill -0 60150 00:11:49.162 07:57:54 -- common/autotest_common.sh@931 -- # uname 00:11:49.162 07:57:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:49.420 07:57:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60150 00:11:49.420 killing process with pid 60150 00:11:49.420 07:57:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:49.420 07:57:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:49.420 07:57:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60150' 00:11:49.420 07:57:54 -- common/autotest_common.sh@945 -- # kill 60150 00:11:49.420 07:57:54 -- common/autotest_common.sh@950 -- # wait 60150 00:11:49.420 [2024-07-13 07:57:54.996981] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:49.420 [2024-07-13 07:57:54.997035] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:49.420 07:57:55 -- bdev/bdev_raid.sh@289 -- # return 0 00:11:49.420 00:11:49.420 real 0m9.424s 00:11:49.420 user 0m17.142s 00:11:49.420 sys 0m1.305s 00:11:49.420 07:57:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:49.420 07:57:55 -- common/autotest_common.sh@10 -- # set +x 00:11:49.420 ************************************ 00:11:49.420 END TEST raid_state_function_test 00:11:49.420 ************************************ 00:11:49.679 07:57:55 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:11:49.679 07:57:55 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:11:49.679 07:57:55 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:11:49.679 07:57:55 -- common/autotest_common.sh@10 -- # set +x 00:11:49.679 ************************************ 00:11:49.679 START TEST raid_state_function_test_sb 00:11:49.679 ************************************ 00:11:49.679 07:57:55 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 true 00:11:49.679 07:57:55 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:11:49.679 07:57:55 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:11:49.679 07:57:55 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:11:49.679 07:57:55 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:11:49.679 07:57:55 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:11:49.679 07:57:55 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:11:49.679 07:57:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:49.679 07:57:55 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:11:49.679 07:57:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:49.679 07:57:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:49.679 07:57:55 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:11:49.679 07:57:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:49.679 07:57:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:49.679 07:57:55 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:11:49.679 07:57:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:49.679 07:57:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:49.679 Process raid pid: 60511 00:11:49.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:49.679 07:57:55 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:11:49.679 07:57:55 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:11:49.679 07:57:55 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:11:49.679 07:57:55 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:11:49.679 07:57:55 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:11:49.679 07:57:55 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:11:49.679 07:57:55 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:11:49.679 07:57:55 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:11:49.679 07:57:55 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:11:49.679 07:57:55 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:11:49.679 07:57:55 -- bdev/bdev_raid.sh@226 -- # raid_pid=60511 00:11:49.679 07:57:55 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 60511' 00:11:49.679 07:57:55 -- bdev/bdev_raid.sh@228 -- # waitforlisten 60511 /var/tmp/spdk-raid.sock 00:11:49.679 07:57:55 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:49.679 07:57:55 -- common/autotest_common.sh@819 -- # '[' -z 60511 ']' 00:11:49.679 07:57:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:49.679 07:57:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:49.679 07:57:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:49.679 07:57:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:49.679 07:57:55 -- common/autotest_common.sh@10 -- # set +x 00:11:49.679 [2024-07-13 07:57:55.390786] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
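The _sb variant that starts here (pid 60511) re-runs the same state machine with superblock=true, so superblock_create_arg becomes -s and every create call writes an on-disk superblock. The visible difference in the later dumps is "superblock": true plus base bdevs reporting data_offset 2048 and data_size 63488 instead of 0 and 65536: the first 2048 of each malloc bdev's 65536 blocks are reserved ahead of the data region, which is also why the assembled array reports blockcnt 190464 (3 x 63488) rather than 196608. The create call this test issues is:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_create -z 64 -s -r raid0 \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid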
00:11:49.679 [2024-07-13 07:57:55.391019] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.937 [2024-07-13 07:57:55.543018] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.937 [2024-07-13 07:57:55.596509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.937 [2024-07-13 07:57:55.646489] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:50.506 07:57:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:50.506 07:57:56 -- common/autotest_common.sh@852 -- # return 0 00:11:50.506 07:57:56 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:50.765 [2024-07-13 07:57:56.351817] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:50.765 [2024-07-13 07:57:56.351883] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:50.765 [2024-07-13 07:57:56.351894] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:50.765 [2024-07-13 07:57:56.351916] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:50.765 [2024-07-13 07:57:56.351923] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:50.765 [2024-07-13 07:57:56.351957] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:50.765 07:57:56 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:50.765 07:57:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:50.765 07:57:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:50.765 07:57:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:50.765 07:57:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:50.765 07:57:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:50.765 07:57:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:50.765 07:57:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:50.765 07:57:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:50.765 07:57:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:50.765 07:57:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:50.765 07:57:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.765 07:57:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:50.765 "name": "Existed_Raid", 00:11:50.765 "uuid": "51faafee-e24e-4051-8df9-2835948bf902", 00:11:50.765 "strip_size_kb": 64, 00:11:50.765 "state": "configuring", 00:11:50.765 "raid_level": "raid0", 00:11:50.765 "superblock": true, 00:11:50.765 "num_base_bdevs": 3, 00:11:50.765 "num_base_bdevs_discovered": 0, 00:11:50.765 "num_base_bdevs_operational": 3, 00:11:50.765 "base_bdevs_list": [ 00:11:50.765 { 00:11:50.765 "name": "BaseBdev1", 00:11:50.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.765 "is_configured": false, 00:11:50.765 "data_offset": 0, 00:11:50.765 "data_size": 0 00:11:50.765 }, 00:11:50.765 { 00:11:50.765 "name": "BaseBdev2", 00:11:50.765 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:50.765 "is_configured": false, 00:11:50.765 "data_offset": 0, 00:11:50.765 "data_size": 0 00:11:50.765 }, 00:11:50.765 { 00:11:50.765 "name": "BaseBdev3", 00:11:50.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.765 "is_configured": false, 00:11:50.765 "data_offset": 0, 00:11:50.765 "data_size": 0 00:11:50.765 } 00:11:50.765 ] 00:11:50.765 }' 00:11:50.765 07:57:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:50.765 07:57:56 -- common/autotest_common.sh@10 -- # set +x 00:11:51.332 07:57:57 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:51.591 [2024-07-13 07:57:57.283777] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:51.591 [2024-07-13 07:57:57.283814] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000025880 name Existed_Raid, state configuring 00:11:51.591 07:57:57 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:51.849 [2024-07-13 07:57:57.463879] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:51.849 [2024-07-13 07:57:57.463943] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:51.849 [2024-07-13 07:57:57.463954] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:51.849 [2024-07-13 07:57:57.463970] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:51.849 [2024-07-13 07:57:57.463977] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:51.849 [2024-07-13 07:57:57.463998] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:51.849 07:57:57 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:51.849 BaseBdev1 00:11:51.849 [2024-07-13 07:57:57.633551] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:51.849 07:57:57 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:11:51.849 07:57:57 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:11:51.849 07:57:57 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:51.849 07:57:57 -- common/autotest_common.sh@889 -- # local i 00:11:51.849 07:57:57 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:51.849 07:57:57 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:51.849 07:57:57 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:52.107 07:57:57 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:52.366 [ 00:11:52.366 { 00:11:52.366 "name": "BaseBdev1", 00:11:52.366 "aliases": [ 00:11:52.366 "5b48f8dd-0824-4782-8d67-d1f6fe2934ba" 00:11:52.366 ], 00:11:52.366 "product_name": "Malloc disk", 00:11:52.366 "block_size": 512, 00:11:52.366 "num_blocks": 65536, 00:11:52.366 "uuid": "5b48f8dd-0824-4782-8d67-d1f6fe2934ba", 00:11:52.366 "assigned_rate_limits": { 00:11:52.366 "rw_ios_per_sec": 0, 00:11:52.366 "rw_mbytes_per_sec": 0, 00:11:52.366 "r_mbytes_per_sec": 0, 00:11:52.366 
"w_mbytes_per_sec": 0 00:11:52.366 }, 00:11:52.366 "claimed": true, 00:11:52.366 "claim_type": "exclusive_write", 00:11:52.366 "zoned": false, 00:11:52.366 "supported_io_types": { 00:11:52.366 "read": true, 00:11:52.366 "write": true, 00:11:52.366 "unmap": true, 00:11:52.366 "write_zeroes": true, 00:11:52.366 "flush": true, 00:11:52.366 "reset": true, 00:11:52.366 "compare": false, 00:11:52.366 "compare_and_write": false, 00:11:52.366 "abort": true, 00:11:52.366 "nvme_admin": false, 00:11:52.366 "nvme_io": false 00:11:52.366 }, 00:11:52.366 "memory_domains": [ 00:11:52.366 { 00:11:52.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.366 "dma_device_type": 2 00:11:52.366 } 00:11:52.366 ], 00:11:52.366 "driver_specific": {} 00:11:52.366 } 00:11:52.366 ] 00:11:52.366 07:57:57 -- common/autotest_common.sh@895 -- # return 0 00:11:52.366 07:57:57 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:52.366 07:57:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:52.366 07:57:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:52.366 07:57:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:52.366 07:57:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:52.366 07:57:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:52.366 07:57:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:52.366 07:57:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:52.366 07:57:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:52.366 07:57:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:52.366 07:57:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.366 07:57:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:52.366 07:57:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:52.366 "name": "Existed_Raid", 00:11:52.366 "uuid": "02a92e4f-0919-495b-989c-2aeb998f5b57", 00:11:52.366 "strip_size_kb": 64, 00:11:52.366 "state": "configuring", 00:11:52.366 "raid_level": "raid0", 00:11:52.366 "superblock": true, 00:11:52.366 "num_base_bdevs": 3, 00:11:52.366 "num_base_bdevs_discovered": 1, 00:11:52.366 "num_base_bdevs_operational": 3, 00:11:52.366 "base_bdevs_list": [ 00:11:52.366 { 00:11:52.366 "name": "BaseBdev1", 00:11:52.366 "uuid": "5b48f8dd-0824-4782-8d67-d1f6fe2934ba", 00:11:52.366 "is_configured": true, 00:11:52.366 "data_offset": 2048, 00:11:52.366 "data_size": 63488 00:11:52.366 }, 00:11:52.366 { 00:11:52.366 "name": "BaseBdev2", 00:11:52.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.366 "is_configured": false, 00:11:52.366 "data_offset": 0, 00:11:52.366 "data_size": 0 00:11:52.366 }, 00:11:52.366 { 00:11:52.366 "name": "BaseBdev3", 00:11:52.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.366 "is_configured": false, 00:11:52.366 "data_offset": 0, 00:11:52.366 "data_size": 0 00:11:52.366 } 00:11:52.366 ] 00:11:52.366 }' 00:11:52.366 07:57:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:52.366 07:57:58 -- common/autotest_common.sh@10 -- # set +x 00:11:53.315 07:57:58 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:53.315 [2024-07-13 07:57:58.933726] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:53.315 [2024-07-13 07:57:58.933776] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000026480 name Existed_Raid, state configuring 00:11:53.315 07:57:58 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:11:53.315 07:57:58 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:53.315 07:57:59 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:53.573 BaseBdev1 00:11:53.573 07:57:59 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:11:53.573 07:57:59 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:11:53.573 07:57:59 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:53.573 07:57:59 -- common/autotest_common.sh@889 -- # local i 00:11:53.573 07:57:59 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:53.573 07:57:59 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:53.573 07:57:59 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:53.832 07:57:59 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:53.832 [ 00:11:53.832 { 00:11:53.832 "name": "BaseBdev1", 00:11:53.832 "aliases": [ 00:11:53.832 "2e49f2b2-5c48-4333-8a46-48721249c696" 00:11:53.832 ], 00:11:53.832 "product_name": "Malloc disk", 00:11:53.832 "block_size": 512, 00:11:53.832 "num_blocks": 65536, 00:11:53.832 "uuid": "2e49f2b2-5c48-4333-8a46-48721249c696", 00:11:53.832 "assigned_rate_limits": { 00:11:53.832 "rw_ios_per_sec": 0, 00:11:53.832 "rw_mbytes_per_sec": 0, 00:11:53.832 "r_mbytes_per_sec": 0, 00:11:53.832 "w_mbytes_per_sec": 0 00:11:53.832 }, 00:11:53.832 "claimed": false, 00:11:53.832 "zoned": false, 00:11:53.832 "supported_io_types": { 00:11:53.832 "read": true, 00:11:53.832 "write": true, 00:11:53.832 "unmap": true, 00:11:53.832 "write_zeroes": true, 00:11:53.832 "flush": true, 00:11:53.832 "reset": true, 00:11:53.832 "compare": false, 00:11:53.832 "compare_and_write": false, 00:11:53.832 "abort": true, 00:11:53.832 "nvme_admin": false, 00:11:53.832 "nvme_io": false 00:11:53.832 }, 00:11:53.832 "memory_domains": [ 00:11:53.832 { 00:11:53.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.832 "dma_device_type": 2 00:11:53.832 } 00:11:53.832 ], 00:11:53.832 "driver_specific": {} 00:11:53.832 } 00:11:53.832 ] 00:11:53.832 07:57:59 -- common/autotest_common.sh@895 -- # return 0 00:11:53.832 07:57:59 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:54.091 [2024-07-13 07:57:59.754103] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:54.091 [2024-07-13 07:57:59.755616] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:54.091 [2024-07-13 07:57:59.755670] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:54.091 [2024-07-13 07:57:59.755681] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:54.091 [2024-07-13 07:57:59.755702] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:54.091 07:57:59 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:11:54.091 07:57:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:54.091 
07:57:59 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:54.091 07:57:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:54.091 07:57:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:54.091 07:57:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:54.091 07:57:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:54.091 07:57:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:54.091 07:57:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:54.091 07:57:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:54.091 07:57:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:54.091 07:57:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:54.091 07:57:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:54.091 07:57:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.349 07:57:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:54.349 "name": "Existed_Raid", 00:11:54.349 "uuid": "7d537f5b-1cae-4a62-b102-4b803cb595fc", 00:11:54.349 "strip_size_kb": 64, 00:11:54.349 "state": "configuring", 00:11:54.349 "raid_level": "raid0", 00:11:54.349 "superblock": true, 00:11:54.349 "num_base_bdevs": 3, 00:11:54.349 "num_base_bdevs_discovered": 1, 00:11:54.349 "num_base_bdevs_operational": 3, 00:11:54.349 "base_bdevs_list": [ 00:11:54.349 { 00:11:54.349 "name": "BaseBdev1", 00:11:54.349 "uuid": "2e49f2b2-5c48-4333-8a46-48721249c696", 00:11:54.349 "is_configured": true, 00:11:54.349 "data_offset": 2048, 00:11:54.349 "data_size": 63488 00:11:54.349 }, 00:11:54.349 { 00:11:54.349 "name": "BaseBdev2", 00:11:54.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.349 "is_configured": false, 00:11:54.349 "data_offset": 0, 00:11:54.349 "data_size": 0 00:11:54.349 }, 00:11:54.349 { 00:11:54.349 "name": "BaseBdev3", 00:11:54.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.349 "is_configured": false, 00:11:54.349 "data_offset": 0, 00:11:54.349 "data_size": 0 00:11:54.349 } 00:11:54.349 ] 00:11:54.349 }' 00:11:54.349 07:57:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:54.349 07:57:59 -- common/autotest_common.sh@10 -- # set +x 00:11:54.915 07:58:00 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:54.915 BaseBdev2 00:11:54.915 [2024-07-13 07:58:00.657724] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:54.915 07:58:00 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:11:54.915 07:58:00 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:11:54.915 07:58:00 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:54.915 07:58:00 -- common/autotest_common.sh@889 -- # local i 00:11:54.915 07:58:00 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:54.915 07:58:00 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:54.915 07:58:00 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:55.174 07:58:00 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:55.432 [ 00:11:55.432 { 00:11:55.432 "name": "BaseBdev2", 00:11:55.432 "aliases": [ 00:11:55.432 
"14463e88-e691-4f0e-870e-ff50494b43b8" 00:11:55.432 ], 00:11:55.432 "product_name": "Malloc disk", 00:11:55.432 "block_size": 512, 00:11:55.432 "num_blocks": 65536, 00:11:55.432 "uuid": "14463e88-e691-4f0e-870e-ff50494b43b8", 00:11:55.432 "assigned_rate_limits": { 00:11:55.432 "rw_ios_per_sec": 0, 00:11:55.432 "rw_mbytes_per_sec": 0, 00:11:55.432 "r_mbytes_per_sec": 0, 00:11:55.432 "w_mbytes_per_sec": 0 00:11:55.432 }, 00:11:55.432 "claimed": true, 00:11:55.432 "claim_type": "exclusive_write", 00:11:55.432 "zoned": false, 00:11:55.432 "supported_io_types": { 00:11:55.432 "read": true, 00:11:55.432 "write": true, 00:11:55.432 "unmap": true, 00:11:55.432 "write_zeroes": true, 00:11:55.433 "flush": true, 00:11:55.433 "reset": true, 00:11:55.433 "compare": false, 00:11:55.433 "compare_and_write": false, 00:11:55.433 "abort": true, 00:11:55.433 "nvme_admin": false, 00:11:55.433 "nvme_io": false 00:11:55.433 }, 00:11:55.433 "memory_domains": [ 00:11:55.433 { 00:11:55.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.433 "dma_device_type": 2 00:11:55.433 } 00:11:55.433 ], 00:11:55.433 "driver_specific": {} 00:11:55.433 } 00:11:55.433 ] 00:11:55.433 07:58:01 -- common/autotest_common.sh@895 -- # return 0 00:11:55.433 07:58:01 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:11:55.433 07:58:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:55.433 07:58:01 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:55.433 07:58:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:55.433 07:58:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:55.433 07:58:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:55.433 07:58:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:55.433 07:58:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:55.433 07:58:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:55.433 07:58:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:55.433 07:58:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:55.433 07:58:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:55.433 07:58:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.433 07:58:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:55.691 07:58:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:55.691 "name": "Existed_Raid", 00:11:55.691 "uuid": "7d537f5b-1cae-4a62-b102-4b803cb595fc", 00:11:55.691 "strip_size_kb": 64, 00:11:55.691 "state": "configuring", 00:11:55.691 "raid_level": "raid0", 00:11:55.691 "superblock": true, 00:11:55.691 "num_base_bdevs": 3, 00:11:55.691 "num_base_bdevs_discovered": 2, 00:11:55.691 "num_base_bdevs_operational": 3, 00:11:55.691 "base_bdevs_list": [ 00:11:55.691 { 00:11:55.691 "name": "BaseBdev1", 00:11:55.691 "uuid": "2e49f2b2-5c48-4333-8a46-48721249c696", 00:11:55.691 "is_configured": true, 00:11:55.691 "data_offset": 2048, 00:11:55.691 "data_size": 63488 00:11:55.691 }, 00:11:55.691 { 00:11:55.691 "name": "BaseBdev2", 00:11:55.691 "uuid": "14463e88-e691-4f0e-870e-ff50494b43b8", 00:11:55.691 "is_configured": true, 00:11:55.691 "data_offset": 2048, 00:11:55.691 "data_size": 63488 00:11:55.691 }, 00:11:55.691 { 00:11:55.691 "name": "BaseBdev3", 00:11:55.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.691 "is_configured": false, 00:11:55.691 "data_offset": 0, 00:11:55.691 "data_size": 0 00:11:55.691 
} 00:11:55.691 ] 00:11:55.691 }' 00:11:55.691 07:58:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:55.691 07:58:01 -- common/autotest_common.sh@10 -- # set +x 00:11:56.258 07:58:01 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:56.258 BaseBdev3 00:11:56.258 [2024-07-13 07:58:01.981667] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:56.258 [2024-07-13 07:58:01.981796] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000027c80 00:11:56.258 [2024-07-13 07:58:01.981808] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:56.258 [2024-07-13 07:58:01.981869] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:11:56.258 [2024-07-13 07:58:01.982037] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000027c80 00:11:56.258 [2024-07-13 07:58:01.982047] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000027c80 00:11:56.258 [2024-07-13 07:58:01.982100] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:56.258 07:58:01 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:11:56.258 07:58:01 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:11:56.258 07:58:01 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:56.258 07:58:01 -- common/autotest_common.sh@889 -- # local i 00:11:56.258 07:58:01 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:56.258 07:58:01 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:56.258 07:58:01 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:56.516 07:58:02 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:56.516 [ 00:11:56.516 { 00:11:56.516 "name": "BaseBdev3", 00:11:56.516 "aliases": [ 00:11:56.516 "06baf8ab-8d5d-4fef-ad78-83c847bc638b" 00:11:56.516 ], 00:11:56.516 "product_name": "Malloc disk", 00:11:56.516 "block_size": 512, 00:11:56.516 "num_blocks": 65536, 00:11:56.516 "uuid": "06baf8ab-8d5d-4fef-ad78-83c847bc638b", 00:11:56.516 "assigned_rate_limits": { 00:11:56.516 "rw_ios_per_sec": 0, 00:11:56.516 "rw_mbytes_per_sec": 0, 00:11:56.516 "r_mbytes_per_sec": 0, 00:11:56.516 "w_mbytes_per_sec": 0 00:11:56.516 }, 00:11:56.516 "claimed": true, 00:11:56.516 "claim_type": "exclusive_write", 00:11:56.516 "zoned": false, 00:11:56.516 "supported_io_types": { 00:11:56.516 "read": true, 00:11:56.516 "write": true, 00:11:56.516 "unmap": true, 00:11:56.516 "write_zeroes": true, 00:11:56.516 "flush": true, 00:11:56.516 "reset": true, 00:11:56.516 "compare": false, 00:11:56.516 "compare_and_write": false, 00:11:56.516 "abort": true, 00:11:56.516 "nvme_admin": false, 00:11:56.516 "nvme_io": false 00:11:56.516 }, 00:11:56.516 "memory_domains": [ 00:11:56.516 { 00:11:56.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.516 "dma_device_type": 2 00:11:56.516 } 00:11:56.516 ], 00:11:56.516 "driver_specific": {} 00:11:56.516 } 00:11:56.516 ] 00:11:56.516 07:58:02 -- common/autotest_common.sh@895 -- # return 0 00:11:56.516 07:58:02 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:11:56.516 07:58:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:56.516 07:58:02 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:56.516 07:58:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:56.516 07:58:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:56.516 07:58:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:56.516 07:58:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:56.516 07:58:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:56.516 07:58:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:56.516 07:58:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:56.516 07:58:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:56.516 07:58:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:56.516 07:58:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.516 07:58:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:56.775 07:58:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:56.775 "name": "Existed_Raid", 00:11:56.775 "uuid": "7d537f5b-1cae-4a62-b102-4b803cb595fc", 00:11:56.775 "strip_size_kb": 64, 00:11:56.775 "state": "online", 00:11:56.775 "raid_level": "raid0", 00:11:56.775 "superblock": true, 00:11:56.775 "num_base_bdevs": 3, 00:11:56.775 "num_base_bdevs_discovered": 3, 00:11:56.775 "num_base_bdevs_operational": 3, 00:11:56.775 "base_bdevs_list": [ 00:11:56.775 { 00:11:56.775 "name": "BaseBdev1", 00:11:56.775 "uuid": "2e49f2b2-5c48-4333-8a46-48721249c696", 00:11:56.775 "is_configured": true, 00:11:56.775 "data_offset": 2048, 00:11:56.775 "data_size": 63488 00:11:56.775 }, 00:11:56.775 { 00:11:56.775 "name": "BaseBdev2", 00:11:56.775 "uuid": "14463e88-e691-4f0e-870e-ff50494b43b8", 00:11:56.775 "is_configured": true, 00:11:56.775 "data_offset": 2048, 00:11:56.775 "data_size": 63488 00:11:56.775 }, 00:11:56.775 { 00:11:56.775 "name": "BaseBdev3", 00:11:56.775 "uuid": "06baf8ab-8d5d-4fef-ad78-83c847bc638b", 00:11:56.775 "is_configured": true, 00:11:56.775 "data_offset": 2048, 00:11:56.775 "data_size": 63488 00:11:56.775 } 00:11:56.775 ] 00:11:56.775 }' 00:11:56.775 07:58:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:56.775 07:58:02 -- common/autotest_common.sh@10 -- # set +x 00:11:57.339 07:58:03 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:57.598 [2024-07-13 07:58:03.169989] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:57.598 [2024-07-13 07:58:03.170038] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:57.598 [2024-07-13 07:58:03.170096] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.598 07:58:03 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:11:57.598 07:58:03 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:11:57.598 07:58:03 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:11:57.598 07:58:03 -- bdev/bdev_raid.sh@197 -- # return 1 00:11:57.598 07:58:03 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:11:57.598 07:58:03 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:11:57.598 07:58:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:57.598 07:58:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:11:57.598 07:58:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:57.598 07:58:03 -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:11:57.598 07:58:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:57.598 07:58:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:57.598 07:58:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:57.598 07:58:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:57.598 07:58:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:57.598 07:58:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:57.598 07:58:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.857 07:58:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:57.857 "name": "Existed_Raid", 00:11:57.857 "uuid": "7d537f5b-1cae-4a62-b102-4b803cb595fc", 00:11:57.857 "strip_size_kb": 64, 00:11:57.857 "state": "offline", 00:11:57.857 "raid_level": "raid0", 00:11:57.857 "superblock": true, 00:11:57.857 "num_base_bdevs": 3, 00:11:57.857 "num_base_bdevs_discovered": 2, 00:11:57.857 "num_base_bdevs_operational": 2, 00:11:57.857 "base_bdevs_list": [ 00:11:57.857 { 00:11:57.857 "name": null, 00:11:57.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.857 "is_configured": false, 00:11:57.857 "data_offset": 2048, 00:11:57.857 "data_size": 63488 00:11:57.857 }, 00:11:57.857 { 00:11:57.857 "name": "BaseBdev2", 00:11:57.857 "uuid": "14463e88-e691-4f0e-870e-ff50494b43b8", 00:11:57.857 "is_configured": true, 00:11:57.857 "data_offset": 2048, 00:11:57.857 "data_size": 63488 00:11:57.857 }, 00:11:57.857 { 00:11:57.857 "name": "BaseBdev3", 00:11:57.857 "uuid": "06baf8ab-8d5d-4fef-ad78-83c847bc638b", 00:11:57.857 "is_configured": true, 00:11:57.857 "data_offset": 2048, 00:11:57.857 "data_size": 63488 00:11:57.857 } 00:11:57.857 ] 00:11:57.857 }' 00:11:57.857 07:58:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:57.857 07:58:03 -- common/autotest_common.sh@10 -- # set +x 00:11:58.422 07:58:04 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:11:58.423 07:58:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:58.423 07:58:04 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:11:58.423 07:58:04 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:58.423 07:58:04 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:11:58.423 07:58:04 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:58.423 07:58:04 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:58.681 [2024-07-13 07:58:04.353800] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:58.681 07:58:04 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:11:58.681 07:58:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:58.681 07:58:04 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:58.681 07:58:04 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:11:58.940 07:58:04 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:11:58.940 07:58:04 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:58.940 07:58:04 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:58.940 [2024-07-13 07:58:04.743891] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:58.940 [2024-07-13 
07:58:04.743960] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027c80 name Existed_Raid, state offline 00:11:59.200 07:58:04 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:11:59.200 07:58:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:59.200 07:58:04 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:11:59.200 07:58:04 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:59.200 07:58:04 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:11:59.200 07:58:04 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:11:59.200 07:58:04 -- bdev/bdev_raid.sh@287 -- # killprocess 60511 00:11:59.200 07:58:04 -- common/autotest_common.sh@926 -- # '[' -z 60511 ']' 00:11:59.200 07:58:04 -- common/autotest_common.sh@930 -- # kill -0 60511 00:11:59.200 07:58:04 -- common/autotest_common.sh@931 -- # uname 00:11:59.200 07:58:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:59.200 07:58:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60511 00:11:59.200 07:58:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:59.200 07:58:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:59.200 killing process with pid 60511 00:11:59.200 07:58:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60511' 00:11:59.200 07:58:04 -- common/autotest_common.sh@945 -- # kill 60511 00:11:59.200 07:58:04 -- common/autotest_common.sh@950 -- # wait 60511 00:11:59.200 [2024-07-13 07:58:04.949861] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:59.200 [2024-07-13 07:58:04.949931] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:59.459 07:58:05 -- bdev/bdev_raid.sh@289 -- # return 0 00:11:59.459 ************************************ 00:11:59.459 END TEST raid_state_function_test_sb 00:11:59.459 ************************************ 00:11:59.459 00:11:59.459 real 0m9.971s 00:11:59.459 user 0m18.046s 00:11:59.459 sys 0m1.375s 00:11:59.459 07:58:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:59.459 07:58:05 -- common/autotest_common.sh@10 -- # set +x 00:11:59.459 07:58:05 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:11:59.459 07:58:05 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:11:59.459 07:58:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:59.459 07:58:05 -- common/autotest_common.sh@10 -- # set +x 00:11:59.718 ************************************ 00:11:59.718 START TEST raid_superblock_test 00:11:59.718 ************************************ 00:11:59.718 07:58:05 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 3 00:11:59.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
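The waitforlisten helper invoked next blocks until the freshly spawned bdev_svc answers on the UNIX RPC socket. The real helper in autotest_common.sh does more (retry budget, killing the app on timeout); what follows is only a minimal sketch of the polling pattern it implements, with the socket path taken from the trace and the retry count chosen here purely for illustration:

    rpc_sock=/var/tmp/spdk-raid.sock
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Poll until the daemon accepts an RPC; rpc_get_methods is a cheap query
    # that succeeds as soon as the JSON-RPC server is listening.
    for ((retry = 0; retry < 100; retry++)); do
        if "$rpc_py" -s "$rpc_sock" rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.1
    done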
00:11:59.718 07:58:05 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:11:59.718 07:58:05 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:11:59.718 07:58:05 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:11:59.718 07:58:05 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:11:59.718 07:58:05 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:11:59.718 07:58:05 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:11:59.718 07:58:05 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:11:59.718 07:58:05 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:11:59.718 07:58:05 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:11:59.718 07:58:05 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:11:59.718 07:58:05 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:11:59.718 07:58:05 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:11:59.718 07:58:05 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:11:59.718 07:58:05 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:11:59.718 07:58:05 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:11:59.718 07:58:05 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:11:59.718 07:58:05 -- bdev/bdev_raid.sh@357 -- # raid_pid=60866 00:11:59.718 07:58:05 -- bdev/bdev_raid.sh@358 -- # waitforlisten 60866 /var/tmp/spdk-raid.sock 00:11:59.718 07:58:05 -- common/autotest_common.sh@819 -- # '[' -z 60866 ']' 00:11:59.718 07:58:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:59.718 07:58:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:59.718 07:58:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:59.718 07:58:05 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:11:59.718 07:58:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:59.718 07:58:05 -- common/autotest_common.sh@10 -- # set +x 00:11:59.718 [2024-07-13 07:58:05.417278] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
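Once the app is up, the loop traced below (bdev_raid.sh@361-371) builds each base device as a malloc bdev wrapped in a passthru bdev, so later steps can tear down and re-register the passthru layer while the data, and the RAID superblock written onto it, survive underneath. Condensed from the xtrace that follows; the $rpc shorthand is ours:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for ((i = 1; i <= 3; i++)); do
        # 32 MB malloc bdev with 512-byte blocks, plus a passthru on top.
        $rpc bdev_malloc_create 32 512 -b "malloc$i"
        $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
    # -z 64: 64 KiB strip size; -s: write a superblock onto the base bdevs.
    $rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s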
00:11:59.718 [2024-07-13 07:58:05.417494] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60866 ] 00:11:59.977 [2024-07-13 07:58:05.552377] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.977 [2024-07-13 07:58:05.626914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.977 [2024-07-13 07:58:05.716607] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:00.546 07:58:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:00.546 07:58:06 -- common/autotest_common.sh@852 -- # return 0 00:12:00.546 07:58:06 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:12:00.546 07:58:06 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:12:00.546 07:58:06 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:12:00.546 07:58:06 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:12:00.546 07:58:06 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:00.546 07:58:06 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:00.546 07:58:06 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:12:00.546 07:58:06 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:00.546 07:58:06 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:12:00.546 malloc1 00:12:00.546 07:58:06 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:00.822 [2024-07-13 07:58:06.476007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:00.822 [2024-07-13 07:58:06.476124] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.822 [2024-07-13 07:58:06.476186] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000026180 00:12:00.822 [2024-07-13 07:58:06.476232] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.822 pt1 00:12:00.822 [2024-07-13 07:58:06.478142] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.822 [2024-07-13 07:58:06.478191] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:00.822 07:58:06 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:12:00.822 07:58:06 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:12:00.822 07:58:06 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:12:00.822 07:58:06 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:12:00.822 07:58:06 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:00.822 07:58:06 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:00.822 07:58:06 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:12:00.822 07:58:06 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:00.822 07:58:06 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:12:01.090 malloc2 00:12:01.090 07:58:06 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:12:01.090 [2024-07-13 07:58:06.780234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:01.090 [2024-07-13 07:58:06.780343] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.090 [2024-07-13 07:58:06.780404] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027f80 00:12:01.090 [2024-07-13 07:58:06.780447] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.090 [2024-07-13 07:58:06.782360] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.090 [2024-07-13 07:58:06.782421] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:01.090 pt2 00:12:01.090 07:58:06 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:12:01.090 07:58:06 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:12:01.090 07:58:06 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:12:01.090 07:58:06 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:12:01.090 07:58:06 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:01.090 07:58:06 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:01.090 07:58:06 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:12:01.090 07:58:06 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:01.090 07:58:06 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:12:01.349 malloc3 00:12:01.349 07:58:06 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:01.349 [2024-07-13 07:58:07.085370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:01.349 [2024-07-13 07:58:07.085439] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.349 [2024-07-13 07:58:07.085665] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000029d80 00:12:01.349 [2024-07-13 07:58:07.085739] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.349 [2024-07-13 07:58:07.087272] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.349 [2024-07-13 07:58:07.087314] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:01.349 pt3 00:12:01.349 07:58:07 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:12:01.349 07:58:07 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:12:01.349 07:58:07 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:12:01.608 [2024-07-13 07:58:07.233454] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:01.608 [2024-07-13 07:58:07.234851] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:01.608 [2024-07-13 07:58:07.234900] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:01.608 [2024-07-13 07:58:07.234994] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002b280 00:12:01.608 [2024-07-13 07:58:07.235003] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:01.608 [2024-07-13 07:58:07.235087] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:12:01.608 [2024-07-13 07:58:07.235302] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002b280 00:12:01.608 [2024-07-13 07:58:07.235312] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002b280 00:12:01.608 [2024-07-13 07:58:07.235380] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:01.608 07:58:07 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:01.608 07:58:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:12:01.608 07:58:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:01.608 07:58:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:01.608 07:58:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:01.608 07:58:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:01.608 07:58:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:01.608 07:58:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:01.608 07:58:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:01.608 07:58:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:01.608 07:58:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:01.608 07:58:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.867 07:58:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:01.867 "name": "raid_bdev1", 00:12:01.867 "uuid": "52cac98d-e639-47a3-9b9c-ebfd141893da", 00:12:01.867 "strip_size_kb": 64, 00:12:01.867 "state": "online", 00:12:01.867 "raid_level": "raid0", 00:12:01.867 "superblock": true, 00:12:01.867 "num_base_bdevs": 3, 00:12:01.867 "num_base_bdevs_discovered": 3, 00:12:01.867 "num_base_bdevs_operational": 3, 00:12:01.867 "base_bdevs_list": [ 00:12:01.867 { 00:12:01.867 "name": "pt1", 00:12:01.867 "uuid": "77b24e7b-5159-5c55-b933-b86d4f9c0d3d", 00:12:01.867 "is_configured": true, 00:12:01.867 "data_offset": 2048, 00:12:01.867 "data_size": 63488 00:12:01.868 }, 00:12:01.868 { 00:12:01.868 "name": "pt2", 00:12:01.868 "uuid": "4027d9aa-6db6-5b99-a8e0-109f749d9b9c", 00:12:01.868 "is_configured": true, 00:12:01.868 "data_offset": 2048, 00:12:01.868 "data_size": 63488 00:12:01.868 }, 00:12:01.868 { 00:12:01.868 "name": "pt3", 00:12:01.868 "uuid": "d7519950-474d-5a80-bef2-04ec91553a24", 00:12:01.868 "is_configured": true, 00:12:01.868 "data_offset": 2048, 00:12:01.868 "data_size": 63488 00:12:01.868 } 00:12:01.868 ] 00:12:01.868 }' 00:12:01.868 07:58:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:01.868 07:58:07 -- common/autotest_common.sh@10 -- # set +x 00:12:02.436 07:58:08 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:02.436 07:58:08 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:12:02.436 [2024-07-13 07:58:08.221741] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:02.436 07:58:08 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=52cac98d-e639-47a3-9b9c-ebfd141893da 00:12:02.436 07:58:08 -- bdev/bdev_raid.sh@380 -- # '[' -z 52cac98d-e639-47a3-9b9c-ebfd141893da ']' 00:12:02.436 07:58:08 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:02.696 [2024-07-13 07:58:08.381543] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:02.696 [2024-07-13 07:58:08.381585] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:02.696 [2024-07-13 07:58:08.381687] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:02.696 [2024-07-13 07:58:08.381740] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:02.696 [2024-07-13 07:58:08.381750] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002b280 name raid_bdev1, state offline 00:12:02.696 07:58:08 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:12:02.696 07:58:08 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:02.955 07:58:08 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:12:02.955 07:58:08 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:12:02.955 07:58:08 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:12:02.955 07:58:08 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:02.955 07:58:08 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:12:02.955 07:58:08 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:03.215 07:58:08 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:12:03.215 07:58:08 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:03.215 07:58:09 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:03.215 07:58:09 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:12:03.474 07:58:09 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:12:03.474 07:58:09 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:03.474 07:58:09 -- common/autotest_common.sh@640 -- # local es=0 00:12:03.474 07:58:09 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:03.474 07:58:09 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:03.474 07:58:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:03.474 07:58:09 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:03.474 07:58:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:03.474 07:58:09 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:03.474 07:58:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:03.474 07:58:09 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:03.474 07:58:09 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:03.475 07:58:09 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:03.734 [2024-07-13 07:58:09.369674] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:03.734 [2024-07-13 07:58:09.371612] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:03.734 [2024-07-13 07:58:09.371649] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:03.734 [2024-07-13 07:58:09.371681] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:12:03.734 [2024-07-13 07:58:09.371763] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:12:03.734 [2024-07-13 07:58:09.371789] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:12:03.734 [2024-07-13 07:58:09.371833] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:03.734 [2024-07-13 07:58:09.371844] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002b880 name raid_bdev1, state configuring 00:12:03.734 request: 00:12:03.734 { 00:12:03.734 "name": "raid_bdev1", 00:12:03.734 "raid_level": "raid0", 00:12:03.734 "base_bdevs": [ 00:12:03.734 "malloc1", 00:12:03.734 "malloc2", 00:12:03.734 "malloc3" 00:12:03.734 ], 00:12:03.734 "superblock": false, 00:12:03.734 "strip_size_kb": 64, 00:12:03.734 "method": "bdev_raid_create", 00:12:03.734 "req_id": 1 00:12:03.734 } 00:12:03.734 Got JSON-RPC error response 00:12:03.734 response: 00:12:03.734 { 00:12:03.734 "code": -17, 00:12:03.734 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:03.734 } 00:12:03.734 07:58:09 -- common/autotest_common.sh@643 -- # es=1 00:12:03.734 07:58:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:03.734 07:58:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:03.734 07:58:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:03.734 07:58:09 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:12:03.734 07:58:09 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:03.734 07:58:09 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:12:03.734 07:58:09 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:12:03.734 07:58:09 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:03.994 [2024-07-13 07:58:09.665659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:03.994 [2024-07-13 07:58:09.665739] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.994 [2024-07-13 07:58:09.665789] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002ca80 00:12:03.994 [2024-07-13 07:58:09.665819] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.994 [2024-07-13 07:58:09.667846] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.994 [2024-07-13 07:58:09.667887] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:03.994 [2024-07-13 07:58:09.667973] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:12:03.994 [2024-07-13 07:58:09.668042] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:03.994 pt1 00:12:03.994 07:58:09 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 
configuring raid0 64 3 00:12:03.994 07:58:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:12:03.994 07:58:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:03.994 07:58:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:03.994 07:58:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:03.994 07:58:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:03.994 07:58:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:03.994 07:58:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:03.994 07:58:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:03.994 07:58:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:03.994 07:58:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:03.994 07:58:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.254 07:58:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:04.254 "name": "raid_bdev1", 00:12:04.254 "uuid": "52cac98d-e639-47a3-9b9c-ebfd141893da", 00:12:04.254 "strip_size_kb": 64, 00:12:04.254 "state": "configuring", 00:12:04.254 "raid_level": "raid0", 00:12:04.254 "superblock": true, 00:12:04.254 "num_base_bdevs": 3, 00:12:04.254 "num_base_bdevs_discovered": 1, 00:12:04.254 "num_base_bdevs_operational": 3, 00:12:04.254 "base_bdevs_list": [ 00:12:04.254 { 00:12:04.254 "name": "pt1", 00:12:04.254 "uuid": "77b24e7b-5159-5c55-b933-b86d4f9c0d3d", 00:12:04.254 "is_configured": true, 00:12:04.254 "data_offset": 2048, 00:12:04.254 "data_size": 63488 00:12:04.254 }, 00:12:04.254 { 00:12:04.254 "name": null, 00:12:04.254 "uuid": "4027d9aa-6db6-5b99-a8e0-109f749d9b9c", 00:12:04.254 "is_configured": false, 00:12:04.254 "data_offset": 2048, 00:12:04.254 "data_size": 63488 00:12:04.254 }, 00:12:04.254 { 00:12:04.254 "name": null, 00:12:04.254 "uuid": "d7519950-474d-5a80-bef2-04ec91553a24", 00:12:04.254 "is_configured": false, 00:12:04.254 "data_offset": 2048, 00:12:04.254 "data_size": 63488 00:12:04.254 } 00:12:04.254 ] 00:12:04.254 }' 00:12:04.254 07:58:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:04.254 07:58:09 -- common/autotest_common.sh@10 -- # set +x 00:12:04.823 07:58:10 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:12:04.823 07:58:10 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:05.081 [2024-07-13 07:58:10.729799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:05.082 [2024-07-13 07:58:10.729881] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.082 [2024-07-13 07:58:10.729935] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002e580 00:12:05.082 [2024-07-13 07:58:10.729958] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.082 [2024-07-13 07:58:10.730254] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.082 [2024-07-13 07:58:10.730280] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:05.082 [2024-07-13 07:58:10.730358] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:12:05.082 [2024-07-13 07:58:10.730379] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:05.082 pt2 00:12:05.082 07:58:10 -- 
bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:05.340 [2024-07-13 07:58:10.937834] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:05.340 07:58:10 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:12:05.340 07:58:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:12:05.340 07:58:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:05.340 07:58:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:05.340 07:58:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:05.340 07:58:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:05.340 07:58:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:05.340 07:58:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:05.340 07:58:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:05.340 07:58:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:05.340 07:58:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:05.340 07:58:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.599 07:58:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:05.599 "name": "raid_bdev1", 00:12:05.599 "uuid": "52cac98d-e639-47a3-9b9c-ebfd141893da", 00:12:05.599 "strip_size_kb": 64, 00:12:05.599 "state": "configuring", 00:12:05.599 "raid_level": "raid0", 00:12:05.599 "superblock": true, 00:12:05.599 "num_base_bdevs": 3, 00:12:05.599 "num_base_bdevs_discovered": 1, 00:12:05.599 "num_base_bdevs_operational": 3, 00:12:05.599 "base_bdevs_list": [ 00:12:05.599 { 00:12:05.599 "name": "pt1", 00:12:05.599 "uuid": "77b24e7b-5159-5c55-b933-b86d4f9c0d3d", 00:12:05.599 "is_configured": true, 00:12:05.599 "data_offset": 2048, 00:12:05.599 "data_size": 63488 00:12:05.599 }, 00:12:05.599 { 00:12:05.599 "name": null, 00:12:05.599 "uuid": "4027d9aa-6db6-5b99-a8e0-109f749d9b9c", 00:12:05.599 "is_configured": false, 00:12:05.599 "data_offset": 2048, 00:12:05.599 "data_size": 63488 00:12:05.599 }, 00:12:05.599 { 00:12:05.599 "name": null, 00:12:05.599 "uuid": "d7519950-474d-5a80-bef2-04ec91553a24", 00:12:05.599 "is_configured": false, 00:12:05.599 "data_offset": 2048, 00:12:05.599 "data_size": 63488 00:12:05.599 } 00:12:05.599 ] 00:12:05.599 }' 00:12:05.599 07:58:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:05.599 07:58:11 -- common/autotest_common.sh@10 -- # set +x 00:12:06.165 07:58:11 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:12:06.165 07:58:11 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:12:06.165 07:58:11 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:06.165 [2024-07-13 07:58:11.913896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:06.165 [2024-07-13 07:58:11.913981] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.165 [2024-07-13 07:58:11.914042] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002fa80 00:12:06.165 [2024-07-13 07:58:11.914066] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.165 [2024-07-13 07:58:11.914336] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.165 [2024-07-13 07:58:11.914362] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:06.165 [2024-07-13 07:58:11.914433] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:12:06.165 [2024-07-13 07:58:11.914451] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:06.165 pt2 00:12:06.165 07:58:11 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:12:06.165 07:58:11 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:12:06.165 07:58:11 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:06.424 [2024-07-13 07:58:12.121916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:06.424 [2024-07-13 07:58:12.121983] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.424 [2024-07-13 07:58:12.122017] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000031280 00:12:06.424 [2024-07-13 07:58:12.122045] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.424 [2024-07-13 07:58:12.122437] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.424 [2024-07-13 07:58:12.122491] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:06.424 [2024-07-13 07:58:12.122556] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:12:06.424 [2024-07-13 07:58:12.122582] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:06.424 [2024-07-13 07:58:12.122637] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002df80 00:12:06.424 [2024-07-13 07:58:12.122645] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:06.424 [2024-07-13 07:58:12.122700] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:12:06.424 [2024-07-13 07:58:12.122859] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002df80 00:12:06.424 [2024-07-13 07:58:12.122869] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002df80 00:12:06.424 [2024-07-13 07:58:12.122917] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:06.424 pt3 00:12:06.424 07:58:12 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:12:06.424 07:58:12 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:12:06.424 07:58:12 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:06.424 07:58:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:12:06.424 07:58:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:06.424 07:58:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:06.424 07:58:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:06.424 07:58:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:06.424 07:58:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:06.424 07:58:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:06.424 07:58:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:06.424 07:58:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:06.424 07:58:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.424 07:58:12 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:06.683 07:58:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:06.683 "name": "raid_bdev1", 00:12:06.683 "uuid": "52cac98d-e639-47a3-9b9c-ebfd141893da", 00:12:06.683 "strip_size_kb": 64, 00:12:06.683 "state": "online", 00:12:06.683 "raid_level": "raid0", 00:12:06.683 "superblock": true, 00:12:06.683 "num_base_bdevs": 3, 00:12:06.683 "num_base_bdevs_discovered": 3, 00:12:06.683 "num_base_bdevs_operational": 3, 00:12:06.683 "base_bdevs_list": [ 00:12:06.683 { 00:12:06.683 "name": "pt1", 00:12:06.683 "uuid": "77b24e7b-5159-5c55-b933-b86d4f9c0d3d", 00:12:06.683 "is_configured": true, 00:12:06.683 "data_offset": 2048, 00:12:06.683 "data_size": 63488 00:12:06.683 }, 00:12:06.683 { 00:12:06.683 "name": "pt2", 00:12:06.683 "uuid": "4027d9aa-6db6-5b99-a8e0-109f749d9b9c", 00:12:06.683 "is_configured": true, 00:12:06.683 "data_offset": 2048, 00:12:06.683 "data_size": 63488 00:12:06.683 }, 00:12:06.683 { 00:12:06.683 "name": "pt3", 00:12:06.683 "uuid": "d7519950-474d-5a80-bef2-04ec91553a24", 00:12:06.683 "is_configured": true, 00:12:06.683 "data_offset": 2048, 00:12:06.683 "data_size": 63488 00:12:06.683 } 00:12:06.683 ] 00:12:06.683 }' 00:12:06.683 07:58:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:06.683 07:58:12 -- common/autotest_common.sh@10 -- # set +x 00:12:07.274 07:58:12 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:07.274 07:58:12 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:12:07.542 [2024-07-13 07:58:13.134206] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:07.542 07:58:13 -- bdev/bdev_raid.sh@430 -- # '[' 52cac98d-e639-47a3-9b9c-ebfd141893da '!=' 52cac98d-e639-47a3-9b9c-ebfd141893da ']' 00:12:07.542 07:58:13 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:12:07.542 07:58:13 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:12:07.542 07:58:13 -- bdev/bdev_raid.sh@197 -- # return 1 00:12:07.542 07:58:13 -- bdev/bdev_raid.sh@511 -- # killprocess 60866 00:12:07.542 07:58:13 -- common/autotest_common.sh@926 -- # '[' -z 60866 ']' 00:12:07.542 07:58:13 -- common/autotest_common.sh@930 -- # kill -0 60866 00:12:07.542 07:58:13 -- common/autotest_common.sh@931 -- # uname 00:12:07.542 07:58:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:07.542 07:58:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60866 00:12:07.542 killing process with pid 60866 00:12:07.542 07:58:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:07.542 07:58:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:07.542 07:58:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60866' 00:12:07.542 07:58:13 -- common/autotest_common.sh@945 -- # kill 60866 00:12:07.542 07:58:13 -- common/autotest_common.sh@950 -- # wait 60866 00:12:07.542 [2024-07-13 07:58:13.177926] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:07.542 [2024-07-13 07:58:13.177999] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:07.542 [2024-07-13 07:58:13.178037] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:07.542 [2024-07-13 07:58:13.178046] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002df80 name raid_bdev1, state offline 00:12:07.542 [2024-07-13 07:58:13.207148] 
bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:07.807 ************************************ 00:12:07.807 END TEST raid_superblock_test 00:12:07.807 ************************************ 00:12:07.807 07:58:13 -- bdev/bdev_raid.sh@513 -- # return 0 00:12:07.807 00:12:07.807 real 0m8.128s 00:12:07.807 user 0m14.502s 00:12:07.807 sys 0m1.244s 00:12:07.807 07:58:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:07.807 07:58:13 -- common/autotest_common.sh@10 -- # set +x 00:12:07.807 07:58:13 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:12:07.807 07:58:13 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:12:07.807 07:58:13 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:12:07.807 07:58:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:07.807 07:58:13 -- common/autotest_common.sh@10 -- # set +x 00:12:07.807 ************************************ 00:12:07.807 START TEST raid_state_function_test 00:12:07.807 ************************************ 00:12:07.807 07:58:13 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 false 00:12:07.807 07:58:13 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:12:07.807 07:58:13 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:12:07.807 07:58:13 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:12:07.807 07:58:13 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:12:07.807 07:58:13 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:12:07.807 07:58:13 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:12:07.807 07:58:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:07.807 07:58:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:12:07.807 07:58:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:07.807 07:58:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:07.807 07:58:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:12:07.807 07:58:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:07.807 07:58:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:07.807 07:58:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:12:07.807 07:58:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:07.807 07:58:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:07.807 07:58:13 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:12:07.807 07:58:13 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:12:07.807 07:58:13 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:12:07.807 07:58:13 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:12:07.807 07:58:13 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:12:07.807 07:58:13 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:12:07.807 07:58:13 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:12:07.807 07:58:13 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:12:07.807 07:58:13 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:12:07.807 07:58:13 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:12:07.807 Process raid pid: 61149 00:12:07.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
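The concat run below leans on the same verify_raid_bdev_state helper already traced many times above. Going by its xtraced locals (bdev_raid.sh@117-125) and the jq filter at @127, its core is roughly the following; the field-by-field asserts are our reconstruction of what the untraced tail of the function checks, reusing the $rpc shorthand from the earlier sketch:

    verify_raid_bdev_state() {
        local raid_bdev_name=$1 expected_state=$2 raid_level=$3
        local strip_size=$4 num_base_bdevs_operational=$5
        local raid_bdev_info
        # Fetch all raid bdevs and keep only the one under test.
        raid_bdev_info=$($rpc bdev_raid_get_bdevs all |
            jq -r ".[] | select(.name == \"$raid_bdev_name\")")
        # Assert the fields the caller passed in; any mismatch fails the test.
        [[ $(jq -r '.state' <<< "$raid_bdev_info") == "$expected_state" ]]
        [[ $(jq -r '.raid_level' <<< "$raid_bdev_info") == "$raid_level" ]]
        (( $(jq -r '.strip_size_kb' <<< "$raid_bdev_info") == strip_size ))
        (( $(jq -r '.num_base_bdevs_discovered' <<< "$raid_bdev_info") <= \
           num_base_bdevs_operational ))
    }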
00:12:07.807 07:58:13 -- bdev/bdev_raid.sh@226 -- # raid_pid=61149 00:12:07.807 07:58:13 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 61149' 00:12:07.807 07:58:13 -- bdev/bdev_raid.sh@228 -- # waitforlisten 61149 /var/tmp/spdk-raid.sock 00:12:07.807 07:58:13 -- common/autotest_common.sh@819 -- # '[' -z 61149 ']' 00:12:07.807 07:58:13 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:07.807 07:58:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:07.807 07:58:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:07.807 07:58:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:07.807 07:58:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:07.807 07:58:13 -- common/autotest_common.sh@10 -- # set +x 00:12:07.807 [2024-07-13 07:58:13.602939] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:12:07.807 [2024-07-13 07:58:13.603105] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.065 [2024-07-13 07:58:13.737187] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.065 [2024-07-13 07:58:13.784377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.065 [2024-07-13 07:58:13.828927] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.630 07:58:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:08.630 07:58:14 -- common/autotest_common.sh@852 -- # return 0 00:12:08.630 07:58:14 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:08.888 [2024-07-13 07:58:14.523416] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:08.888 [2024-07-13 07:58:14.523702] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:08.888 [2024-07-13 07:58:14.523724] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:08.889 [2024-07-13 07:58:14.523749] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:08.889 [2024-07-13 07:58:14.523757] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:08.889 [2024-07-13 07:58:14.523793] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:08.889 07:58:14 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:08.889 07:58:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:08.889 07:58:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:08.889 07:58:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:08.889 07:58:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:08.889 07:58:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:08.889 07:58:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:08.889 07:58:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:08.889 07:58:14 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:12:08.889 07:58:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:08.889 07:58:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:08.889 07:58:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.147 07:58:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:09.147 "name": "Existed_Raid", 00:12:09.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.147 "strip_size_kb": 64, 00:12:09.147 "state": "configuring", 00:12:09.147 "raid_level": "concat", 00:12:09.147 "superblock": false, 00:12:09.147 "num_base_bdevs": 3, 00:12:09.147 "num_base_bdevs_discovered": 0, 00:12:09.147 "num_base_bdevs_operational": 3, 00:12:09.147 "base_bdevs_list": [ 00:12:09.147 { 00:12:09.147 "name": "BaseBdev1", 00:12:09.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.147 "is_configured": false, 00:12:09.147 "data_offset": 0, 00:12:09.147 "data_size": 0 00:12:09.147 }, 00:12:09.147 { 00:12:09.147 "name": "BaseBdev2", 00:12:09.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.147 "is_configured": false, 00:12:09.147 "data_offset": 0, 00:12:09.147 "data_size": 0 00:12:09.147 }, 00:12:09.147 { 00:12:09.147 "name": "BaseBdev3", 00:12:09.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.147 "is_configured": false, 00:12:09.147 "data_offset": 0, 00:12:09.147 "data_size": 0 00:12:09.147 } 00:12:09.147 ] 00:12:09.147 }' 00:12:09.147 07:58:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:09.147 07:58:14 -- common/autotest_common.sh@10 -- # set +x 00:12:09.713 07:58:15 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:09.714 [2024-07-13 07:58:15.492138] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:09.714 [2024-07-13 07:58:15.492181] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000025880 name Existed_Raid, state configuring 00:12:09.714 07:58:15 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:09.972 [2024-07-13 07:58:15.648315] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:09.972 [2024-07-13 07:58:15.648442] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:09.972 [2024-07-13 07:58:15.648485] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:09.973 [2024-07-13 07:58:15.648521] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:09.973 [2024-07-13 07:58:15.648542] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:09.973 [2024-07-13 07:58:15.648592] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:09.973 07:58:15 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:10.232 [2024-07-13 07:58:15.815357] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:10.232 BaseBdev1 00:12:10.232 07:58:15 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:12:10.232 07:58:15 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:12:10.232 07:58:15 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:10.232 07:58:15 -- common/autotest_common.sh@889 -- # local i 00:12:10.232 07:58:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:10.232 07:58:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:10.232 07:58:15 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:10.232 07:58:16 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:10.490 [ 00:12:10.490 { 00:12:10.490 "name": "BaseBdev1", 00:12:10.490 "aliases": [ 00:12:10.490 "020ad7bf-33c8-4c56-a116-76ce7d2d9a33" 00:12:10.490 ], 00:12:10.490 "product_name": "Malloc disk", 00:12:10.490 "block_size": 512, 00:12:10.490 "num_blocks": 65536, 00:12:10.490 "uuid": "020ad7bf-33c8-4c56-a116-76ce7d2d9a33", 00:12:10.490 "assigned_rate_limits": { 00:12:10.490 "rw_ios_per_sec": 0, 00:12:10.490 "rw_mbytes_per_sec": 0, 00:12:10.490 "r_mbytes_per_sec": 0, 00:12:10.490 "w_mbytes_per_sec": 0 00:12:10.490 }, 00:12:10.490 "claimed": true, 00:12:10.490 "claim_type": "exclusive_write", 00:12:10.491 "zoned": false, 00:12:10.491 "supported_io_types": { 00:12:10.491 "read": true, 00:12:10.491 "write": true, 00:12:10.491 "unmap": true, 00:12:10.491 "write_zeroes": true, 00:12:10.491 "flush": true, 00:12:10.491 "reset": true, 00:12:10.491 "compare": false, 00:12:10.491 "compare_and_write": false, 00:12:10.491 "abort": true, 00:12:10.491 "nvme_admin": false, 00:12:10.491 "nvme_io": false 00:12:10.491 }, 00:12:10.491 "memory_domains": [ 00:12:10.491 { 00:12:10.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.491 "dma_device_type": 2 00:12:10.491 } 00:12:10.491 ], 00:12:10.491 "driver_specific": {} 00:12:10.491 } 00:12:10.491 ] 00:12:10.491 07:58:16 -- common/autotest_common.sh@895 -- # return 0 00:12:10.491 07:58:16 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:10.491 07:58:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:10.491 07:58:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:10.491 07:58:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:10.491 07:58:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:10.491 07:58:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:10.491 07:58:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:10.491 07:58:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:10.491 07:58:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:10.491 07:58:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:10.491 07:58:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:10.491 07:58:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.749 07:58:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:10.749 "name": "Existed_Raid", 00:12:10.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.749 "strip_size_kb": 64, 00:12:10.749 "state": "configuring", 00:12:10.749 "raid_level": "concat", 00:12:10.749 "superblock": false, 00:12:10.749 "num_base_bdevs": 3, 00:12:10.749 "num_base_bdevs_discovered": 1, 00:12:10.749 "num_base_bdevs_operational": 3, 00:12:10.749 "base_bdevs_list": [ 00:12:10.749 { 00:12:10.749 "name": "BaseBdev1", 00:12:10.749 "uuid": "020ad7bf-33c8-4c56-a116-76ce7d2d9a33", 
00:12:10.749 "is_configured": true, 00:12:10.749 "data_offset": 0, 00:12:10.749 "data_size": 65536 00:12:10.749 }, 00:12:10.749 { 00:12:10.749 "name": "BaseBdev2", 00:12:10.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.749 "is_configured": false, 00:12:10.749 "data_offset": 0, 00:12:10.749 "data_size": 0 00:12:10.749 }, 00:12:10.749 { 00:12:10.749 "name": "BaseBdev3", 00:12:10.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.749 "is_configured": false, 00:12:10.749 "data_offset": 0, 00:12:10.749 "data_size": 0 00:12:10.749 } 00:12:10.749 ] 00:12:10.749 }' 00:12:10.749 07:58:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:10.749 07:58:16 -- common/autotest_common.sh@10 -- # set +x 00:12:11.008 07:58:16 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:11.270 [2024-07-13 07:58:16.947619] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:11.270 [2024-07-13 07:58:16.947699] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026480 name Existed_Raid, state configuring 00:12:11.270 07:58:16 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:12:11.270 07:58:16 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:11.535 [2024-07-13 07:58:17.091657] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:11.535 [2024-07-13 07:58:17.093578] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:11.535 [2024-07-13 07:58:17.093635] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:11.535 [2024-07-13 07:58:17.093645] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:11.535 [2024-07-13 07:58:17.093676] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:11.535 07:58:17 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:12:11.535 07:58:17 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:11.536 07:58:17 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:11.536 07:58:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:11.536 07:58:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:11.536 07:58:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:11.536 07:58:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:11.536 07:58:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:11.536 07:58:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:11.536 07:58:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:11.536 07:58:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:11.536 07:58:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:11.536 07:58:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.536 07:58:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:11.536 07:58:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:11.536 "name": "Existed_Raid", 00:12:11.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.536 "strip_size_kb": 64, 00:12:11.536 "state": "configuring", 00:12:11.536 "raid_level": "concat", 
00:12:11.536 "superblock": false, 00:12:11.536 "num_base_bdevs": 3, 00:12:11.536 "num_base_bdevs_discovered": 1, 00:12:11.536 "num_base_bdevs_operational": 3, 00:12:11.536 "base_bdevs_list": [ 00:12:11.536 { 00:12:11.536 "name": "BaseBdev1", 00:12:11.536 "uuid": "020ad7bf-33c8-4c56-a116-76ce7d2d9a33", 00:12:11.536 "is_configured": true, 00:12:11.536 "data_offset": 0, 00:12:11.536 "data_size": 65536 00:12:11.536 }, 00:12:11.536 { 00:12:11.536 "name": "BaseBdev2", 00:12:11.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.536 "is_configured": false, 00:12:11.536 "data_offset": 0, 00:12:11.536 "data_size": 0 00:12:11.536 }, 00:12:11.536 { 00:12:11.536 "name": "BaseBdev3", 00:12:11.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.536 "is_configured": false, 00:12:11.536 "data_offset": 0, 00:12:11.536 "data_size": 0 00:12:11.536 } 00:12:11.536 ] 00:12:11.536 }' 00:12:11.536 07:58:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:11.536 07:58:17 -- common/autotest_common.sh@10 -- # set +x 00:12:12.473 07:58:17 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:12.473 BaseBdev2 00:12:12.473 [2024-07-13 07:58:18.206581] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:12.473 07:58:18 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:12:12.473 07:58:18 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:12:12.473 07:58:18 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:12.473 07:58:18 -- common/autotest_common.sh@889 -- # local i 00:12:12.473 07:58:18 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:12.473 07:58:18 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:12.473 07:58:18 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:12.732 07:58:18 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:12.991 [ 00:12:12.991 { 00:12:12.991 "name": "BaseBdev2", 00:12:12.991 "aliases": [ 00:12:12.991 "85f06014-28cb-4a6f-8038-ac4268fea19b" 00:12:12.991 ], 00:12:12.991 "product_name": "Malloc disk", 00:12:12.991 "block_size": 512, 00:12:12.992 "num_blocks": 65536, 00:12:12.992 "uuid": "85f06014-28cb-4a6f-8038-ac4268fea19b", 00:12:12.992 "assigned_rate_limits": { 00:12:12.992 "rw_ios_per_sec": 0, 00:12:12.992 "rw_mbytes_per_sec": 0, 00:12:12.992 "r_mbytes_per_sec": 0, 00:12:12.992 "w_mbytes_per_sec": 0 00:12:12.992 }, 00:12:12.992 "claimed": true, 00:12:12.992 "claim_type": "exclusive_write", 00:12:12.992 "zoned": false, 00:12:12.992 "supported_io_types": { 00:12:12.992 "read": true, 00:12:12.992 "write": true, 00:12:12.992 "unmap": true, 00:12:12.992 "write_zeroes": true, 00:12:12.992 "flush": true, 00:12:12.992 "reset": true, 00:12:12.992 "compare": false, 00:12:12.992 "compare_and_write": false, 00:12:12.992 "abort": true, 00:12:12.992 "nvme_admin": false, 00:12:12.992 "nvme_io": false 00:12:12.992 }, 00:12:12.992 "memory_domains": [ 00:12:12.992 { 00:12:12.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.992 "dma_device_type": 2 00:12:12.992 } 00:12:12.992 ], 00:12:12.992 "driver_specific": {} 00:12:12.992 } 00:12:12.992 ] 00:12:12.992 07:58:18 -- common/autotest_common.sh@895 -- # return 0 00:12:12.992 07:58:18 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:12:12.992 07:58:18 -- bdev/bdev_raid.sh@254 -- # (( 
i < num_base_bdevs )) 00:12:12.992 07:58:18 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:12.992 07:58:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:12.992 07:58:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:12.992 07:58:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:12.992 07:58:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:12.992 07:58:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:12.992 07:58:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:12.992 07:58:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:12.992 07:58:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:12.992 07:58:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:12.992 07:58:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.992 07:58:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:13.251 07:58:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:13.251 "name": "Existed_Raid", 00:12:13.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.251 "strip_size_kb": 64, 00:12:13.251 "state": "configuring", 00:12:13.251 "raid_level": "concat", 00:12:13.251 "superblock": false, 00:12:13.251 "num_base_bdevs": 3, 00:12:13.251 "num_base_bdevs_discovered": 2, 00:12:13.251 "num_base_bdevs_operational": 3, 00:12:13.251 "base_bdevs_list": [ 00:12:13.251 { 00:12:13.251 "name": "BaseBdev1", 00:12:13.251 "uuid": "020ad7bf-33c8-4c56-a116-76ce7d2d9a33", 00:12:13.251 "is_configured": true, 00:12:13.251 "data_offset": 0, 00:12:13.251 "data_size": 65536 00:12:13.251 }, 00:12:13.251 { 00:12:13.251 "name": "BaseBdev2", 00:12:13.251 "uuid": "85f06014-28cb-4a6f-8038-ac4268fea19b", 00:12:13.251 "is_configured": true, 00:12:13.251 "data_offset": 0, 00:12:13.251 "data_size": 65536 00:12:13.251 }, 00:12:13.251 { 00:12:13.251 "name": "BaseBdev3", 00:12:13.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.251 "is_configured": false, 00:12:13.251 "data_offset": 0, 00:12:13.251 "data_size": 0 00:12:13.251 } 00:12:13.251 ] 00:12:13.251 }' 00:12:13.251 07:58:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:13.251 07:58:18 -- common/autotest_common.sh@10 -- # set +x 00:12:13.819 07:58:19 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:13.819 [2024-07-13 07:58:19.617985] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:13.819 [2024-07-13 07:58:19.618034] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000027680 00:12:13.819 [2024-07-13 07:58:19.618043] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:13.819 [2024-07-13 07:58:19.618114] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:12:13.819 [2024-07-13 07:58:19.618323] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000027680 00:12:13.819 [2024-07-13 07:58:19.618333] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000027680 00:12:13.819 BaseBdev3 00:12:13.819 [2024-07-13 07:58:19.618686] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.078 07:58:19 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 
00:12:14.078 07:58:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:12:14.078 07:58:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:14.078 07:58:19 -- common/autotest_common.sh@889 -- # local i 00:12:14.078 07:58:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:14.078 07:58:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:14.078 07:58:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:14.078 07:58:19 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:14.337 [ 00:12:14.337 { 00:12:14.337 "name": "BaseBdev3", 00:12:14.337 "aliases": [ 00:12:14.337 "26d79c82-c375-404f-aeac-f51c721554f0" 00:12:14.337 ], 00:12:14.337 "product_name": "Malloc disk", 00:12:14.337 "block_size": 512, 00:12:14.337 "num_blocks": 65536, 00:12:14.337 "uuid": "26d79c82-c375-404f-aeac-f51c721554f0", 00:12:14.337 "assigned_rate_limits": { 00:12:14.337 "rw_ios_per_sec": 0, 00:12:14.337 "rw_mbytes_per_sec": 0, 00:12:14.337 "r_mbytes_per_sec": 0, 00:12:14.337 "w_mbytes_per_sec": 0 00:12:14.337 }, 00:12:14.337 "claimed": true, 00:12:14.337 "claim_type": "exclusive_write", 00:12:14.337 "zoned": false, 00:12:14.337 "supported_io_types": { 00:12:14.337 "read": true, 00:12:14.337 "write": true, 00:12:14.337 "unmap": true, 00:12:14.337 "write_zeroes": true, 00:12:14.337 "flush": true, 00:12:14.337 "reset": true, 00:12:14.337 "compare": false, 00:12:14.337 "compare_and_write": false, 00:12:14.337 "abort": true, 00:12:14.337 "nvme_admin": false, 00:12:14.337 "nvme_io": false 00:12:14.337 }, 00:12:14.337 "memory_domains": [ 00:12:14.337 { 00:12:14.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.337 "dma_device_type": 2 00:12:14.337 } 00:12:14.337 ], 00:12:14.337 "driver_specific": {} 00:12:14.337 } 00:12:14.337 ] 00:12:14.337 07:58:19 -- common/autotest_common.sh@895 -- # return 0 00:12:14.337 07:58:19 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:12:14.337 07:58:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:14.337 07:58:19 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:14.337 07:58:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:14.337 07:58:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:14.337 07:58:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:14.337 07:58:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:14.337 07:58:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:14.337 07:58:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:14.337 07:58:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:14.337 07:58:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:14.337 07:58:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:14.337 07:58:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.337 07:58:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:14.596 07:58:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:14.596 "name": "Existed_Raid", 00:12:14.596 "uuid": "f8b9d73f-10a2-44b1-9220-014b9d2e941b", 00:12:14.596 "strip_size_kb": 64, 00:12:14.596 "state": "online", 00:12:14.596 "raid_level": "concat", 00:12:14.596 "superblock": false, 00:12:14.596 "num_base_bdevs": 3, 
00:12:14.596 "num_base_bdevs_discovered": 3, 00:12:14.596 "num_base_bdevs_operational": 3, 00:12:14.596 "base_bdevs_list": [ 00:12:14.596 { 00:12:14.596 "name": "BaseBdev1", 00:12:14.596 "uuid": "020ad7bf-33c8-4c56-a116-76ce7d2d9a33", 00:12:14.596 "is_configured": true, 00:12:14.596 "data_offset": 0, 00:12:14.596 "data_size": 65536 00:12:14.596 }, 00:12:14.596 { 00:12:14.596 "name": "BaseBdev2", 00:12:14.596 "uuid": "85f06014-28cb-4a6f-8038-ac4268fea19b", 00:12:14.596 "is_configured": true, 00:12:14.596 "data_offset": 0, 00:12:14.596 "data_size": 65536 00:12:14.596 }, 00:12:14.596 { 00:12:14.596 "name": "BaseBdev3", 00:12:14.596 "uuid": "26d79c82-c375-404f-aeac-f51c721554f0", 00:12:14.596 "is_configured": true, 00:12:14.596 "data_offset": 0, 00:12:14.596 "data_size": 65536 00:12:14.596 } 00:12:14.596 ] 00:12:14.596 }' 00:12:14.596 07:58:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:14.596 07:58:20 -- common/autotest_common.sh@10 -- # set +x 00:12:15.167 07:58:20 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:15.167 [2024-07-13 07:58:20.910244] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:15.167 [2024-07-13 07:58:20.910278] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:15.167 [2024-07-13 07:58:20.910324] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:15.167 07:58:20 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:12:15.167 07:58:20 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:12:15.167 07:58:20 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:12:15.167 07:58:20 -- bdev/bdev_raid.sh@197 -- # return 1 00:12:15.167 07:58:20 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:12:15.167 07:58:20 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:12:15.167 07:58:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:15.167 07:58:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:12:15.167 07:58:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:15.167 07:58:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:15.167 07:58:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:15.167 07:58:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:15.167 07:58:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:15.167 07:58:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:15.167 07:58:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:15.167 07:58:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:15.167 07:58:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.427 07:58:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:15.427 "name": "Existed_Raid", 00:12:15.427 "uuid": "f8b9d73f-10a2-44b1-9220-014b9d2e941b", 00:12:15.427 "strip_size_kb": 64, 00:12:15.427 "state": "offline", 00:12:15.427 "raid_level": "concat", 00:12:15.427 "superblock": false, 00:12:15.427 "num_base_bdevs": 3, 00:12:15.427 "num_base_bdevs_discovered": 2, 00:12:15.427 "num_base_bdevs_operational": 2, 00:12:15.427 "base_bdevs_list": [ 00:12:15.427 { 00:12:15.427 "name": null, 00:12:15.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.427 "is_configured": false, 00:12:15.427 "data_offset": 0, 00:12:15.427 "data_size": 65536 00:12:15.427 }, 
00:12:15.427 { 00:12:15.427 "name": "BaseBdev2", 00:12:15.427 "uuid": "85f06014-28cb-4a6f-8038-ac4268fea19b", 00:12:15.427 "is_configured": true, 00:12:15.427 "data_offset": 0, 00:12:15.427 "data_size": 65536 00:12:15.427 }, 00:12:15.427 { 00:12:15.427 "name": "BaseBdev3", 00:12:15.427 "uuid": "26d79c82-c375-404f-aeac-f51c721554f0", 00:12:15.427 "is_configured": true, 00:12:15.427 "data_offset": 0, 00:12:15.427 "data_size": 65536 00:12:15.427 } 00:12:15.427 ] 00:12:15.427 }' 00:12:15.427 07:58:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:15.427 07:58:21 -- common/autotest_common.sh@10 -- # set +x 00:12:15.995 07:58:21 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:12:15.995 07:58:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:15.995 07:58:21 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:12:15.995 07:58:21 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:16.255 07:58:21 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:12:16.255 07:58:21 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:16.255 07:58:21 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:16.255 [2024-07-13 07:58:22.052522] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:16.513 07:58:22 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:12:16.513 07:58:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:16.513 07:58:22 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:16.513 07:58:22 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:12:16.513 07:58:22 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:12:16.513 07:58:22 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:16.513 07:58:22 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:16.771 [2024-07-13 07:58:22.438658] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:16.771 [2024-07-13 07:58:22.438700] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027680 name Existed_Raid, state offline 00:12:16.771 07:58:22 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:12:16.771 07:58:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:16.771 07:58:22 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:12:16.771 07:58:22 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:17.030 07:58:22 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:12:17.030 07:58:22 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:12:17.030 07:58:22 -- bdev/bdev_raid.sh@287 -- # killprocess 61149 00:12:17.030 07:58:22 -- common/autotest_common.sh@926 -- # '[' -z 61149 ']' 00:12:17.030 07:58:22 -- common/autotest_common.sh@930 -- # kill -0 61149 00:12:17.030 07:58:22 -- common/autotest_common.sh@931 -- # uname 00:12:17.030 07:58:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:17.030 07:58:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61149 00:12:17.030 killing process with pid 61149 00:12:17.030 07:58:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:17.030 07:58:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:17.030 07:58:22 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 61149' 00:12:17.030 07:58:22 -- common/autotest_common.sh@945 -- # kill 61149 00:12:17.030 07:58:22 -- common/autotest_common.sh@950 -- # wait 61149 00:12:17.030 [2024-07-13 07:58:22.685027] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:17.030 [2024-07-13 07:58:22.685077] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:17.290 ************************************ 00:12:17.290 END TEST raid_state_function_test 00:12:17.290 ************************************ 00:12:17.290 07:58:22 -- bdev/bdev_raid.sh@289 -- # return 0 00:12:17.290 00:12:17.290 real 0m9.408s 00:12:17.290 user 0m17.118s 00:12:17.290 sys 0m1.304s 00:12:17.290 07:58:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:17.290 07:58:22 -- common/autotest_common.sh@10 -- # set +x 00:12:17.290 07:58:22 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:12:17.290 07:58:22 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:12:17.290 07:58:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:17.290 07:58:22 -- common/autotest_common.sh@10 -- # set +x 00:12:17.290 ************************************ 00:12:17.290 START TEST raid_state_function_test_sb 00:12:17.290 ************************************ 00:12:17.290 07:58:22 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 true 00:12:17.290 07:58:22 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:12:17.290 07:58:22 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:12:17.290 07:58:22 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:12:17.290 07:58:22 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:12:17.290 07:58:22 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:12:17.290 07:58:22 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:12:17.290 07:58:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:17.290 07:58:22 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:12:17.290 07:58:22 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:17.290 07:58:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:17.290 07:58:22 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:12:17.290 07:58:22 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:17.290 07:58:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:17.290 07:58:22 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:12:17.290 07:58:22 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:17.290 07:58:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:17.290 07:58:22 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:12:17.290 07:58:22 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:12:17.290 07:58:22 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:12:17.290 07:58:22 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:12:17.290 07:58:22 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:12:17.290 07:58:22 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:12:17.290 07:58:22 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:12:17.290 07:58:22 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:12:17.290 07:58:22 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:12:17.290 07:58:22 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:12:17.290 Process raid pid: 61503 00:12:17.290 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk-raid.sock... 00:12:17.290 07:58:22 -- bdev/bdev_raid.sh@226 -- # raid_pid=61503 00:12:17.290 07:58:22 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 61503' 00:12:17.290 07:58:22 -- bdev/bdev_raid.sh@228 -- # waitforlisten 61503 /var/tmp/spdk-raid.sock 00:12:17.290 07:58:22 -- common/autotest_common.sh@819 -- # '[' -z 61503 ']' 00:12:17.290 07:58:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:17.290 07:58:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:17.290 07:58:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:17.290 07:58:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:17.290 07:58:22 -- common/autotest_common.sh@10 -- # set +x 00:12:17.290 07:58:22 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:17.290 [2024-07-13 07:58:23.066578] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:12:17.290 [2024-07-13 07:58:23.066749] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:17.549 [2024-07-13 07:58:23.198227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.549 [2024-07-13 07:58:23.242100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.549 [2024-07-13 07:58:23.286373] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:18.128 07:58:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:18.128 07:58:23 -- common/autotest_common.sh@852 -- # return 0 00:12:18.128 07:58:23 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:18.401 [2024-07-13 07:58:24.092199] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:18.401 [2024-07-13 07:58:24.092266] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:18.401 [2024-07-13 07:58:24.092278] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:18.401 [2024-07-13 07:58:24.092302] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:18.401 [2024-07-13 07:58:24.092310] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:18.401 [2024-07-13 07:58:24.092356] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:18.401 07:58:24 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:18.401 07:58:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:18.401 07:58:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:18.401 07:58:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:18.401 07:58:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:18.401 07:58:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:18.401 07:58:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:18.401 07:58:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:18.401 07:58:24 -- bdev/bdev_raid.sh@124 -- # 
local num_base_bdevs_discovered 00:12:18.401 07:58:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:18.401 07:58:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.401 07:58:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:18.660 07:58:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:18.660 "name": "Existed_Raid", 00:12:18.660 "uuid": "5dd9afa0-929f-4759-af23-6d650897b444", 00:12:18.660 "strip_size_kb": 64, 00:12:18.660 "state": "configuring", 00:12:18.660 "raid_level": "concat", 00:12:18.660 "superblock": true, 00:12:18.660 "num_base_bdevs": 3, 00:12:18.660 "num_base_bdevs_discovered": 0, 00:12:18.660 "num_base_bdevs_operational": 3, 00:12:18.660 "base_bdevs_list": [ 00:12:18.660 { 00:12:18.660 "name": "BaseBdev1", 00:12:18.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.660 "is_configured": false, 00:12:18.660 "data_offset": 0, 00:12:18.660 "data_size": 0 00:12:18.660 }, 00:12:18.660 { 00:12:18.660 "name": "BaseBdev2", 00:12:18.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.660 "is_configured": false, 00:12:18.660 "data_offset": 0, 00:12:18.660 "data_size": 0 00:12:18.660 }, 00:12:18.660 { 00:12:18.660 "name": "BaseBdev3", 00:12:18.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.660 "is_configured": false, 00:12:18.660 "data_offset": 0, 00:12:18.660 "data_size": 0 00:12:18.660 } 00:12:18.660 ] 00:12:18.660 }' 00:12:18.660 07:58:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:18.660 07:58:24 -- common/autotest_common.sh@10 -- # set +x 00:12:19.227 07:58:24 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:19.227 [2024-07-13 07:58:24.880117] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:19.227 [2024-07-13 07:58:24.880151] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000025880 name Existed_Raid, state configuring 00:12:19.227 07:58:24 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:19.227 [2024-07-13 07:58:25.032201] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:19.227 [2024-07-13 07:58:25.032264] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:19.227 [2024-07-13 07:58:25.032274] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:19.227 [2024-07-13 07:58:25.032291] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:19.227 [2024-07-13 07:58:25.032299] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:19.228 [2024-07-13 07:58:25.032322] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:19.485 07:58:25 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:19.485 BaseBdev1 00:12:19.485 [2024-07-13 07:58:25.187519] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:19.485 07:58:25 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:12:19.485 07:58:25 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:12:19.485 07:58:25 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:19.485 07:58:25 -- common/autotest_common.sh@889 -- # local i 00:12:19.485 07:58:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:19.485 07:58:25 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:19.485 07:58:25 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:19.743 07:58:25 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:19.743 [ 00:12:19.743 { 00:12:19.743 "name": "BaseBdev1", 00:12:19.743 "aliases": [ 00:12:19.743 "c46a0dea-e9e0-4009-8499-913e6cd35b05" 00:12:19.743 ], 00:12:19.743 "product_name": "Malloc disk", 00:12:19.743 "block_size": 512, 00:12:19.743 "num_blocks": 65536, 00:12:19.743 "uuid": "c46a0dea-e9e0-4009-8499-913e6cd35b05", 00:12:19.743 "assigned_rate_limits": { 00:12:19.743 "rw_ios_per_sec": 0, 00:12:19.743 "rw_mbytes_per_sec": 0, 00:12:19.743 "r_mbytes_per_sec": 0, 00:12:19.743 "w_mbytes_per_sec": 0 00:12:19.743 }, 00:12:19.743 "claimed": true, 00:12:19.743 "claim_type": "exclusive_write", 00:12:19.743 "zoned": false, 00:12:19.743 "supported_io_types": { 00:12:19.743 "read": true, 00:12:19.743 "write": true, 00:12:19.743 "unmap": true, 00:12:19.743 "write_zeroes": true, 00:12:19.743 "flush": true, 00:12:19.743 "reset": true, 00:12:19.743 "compare": false, 00:12:19.743 "compare_and_write": false, 00:12:19.743 "abort": true, 00:12:19.743 "nvme_admin": false, 00:12:19.743 "nvme_io": false 00:12:19.743 }, 00:12:19.743 "memory_domains": [ 00:12:19.743 { 00:12:19.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.743 "dma_device_type": 2 00:12:19.743 } 00:12:19.743 ], 00:12:19.743 "driver_specific": {} 00:12:19.743 } 00:12:19.743 ] 00:12:19.743 07:58:25 -- common/autotest_common.sh@895 -- # return 0 00:12:19.743 07:58:25 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:19.743 07:58:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:19.743 07:58:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:19.743 07:58:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:19.743 07:58:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:19.743 07:58:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:19.743 07:58:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:19.743 07:58:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:19.743 07:58:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:19.743 07:58:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:19.743 07:58:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.743 07:58:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:20.001 07:58:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:20.001 "name": "Existed_Raid", 00:12:20.001 "uuid": "d8131ff9-93ae-4690-80c5-88d75a559ca9", 00:12:20.001 "strip_size_kb": 64, 00:12:20.001 "state": "configuring", 00:12:20.001 "raid_level": "concat", 00:12:20.001 "superblock": true, 00:12:20.001 "num_base_bdevs": 3, 00:12:20.001 "num_base_bdevs_discovered": 1, 00:12:20.001 "num_base_bdevs_operational": 3, 00:12:20.001 "base_bdevs_list": [ 00:12:20.001 { 00:12:20.001 "name": "BaseBdev1", 00:12:20.001 "uuid": "c46a0dea-e9e0-4009-8499-913e6cd35b05", 
00:12:20.001 "is_configured": true, 00:12:20.001 "data_offset": 2048, 00:12:20.001 "data_size": 63488 00:12:20.001 }, 00:12:20.001 { 00:12:20.001 "name": "BaseBdev2", 00:12:20.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.001 "is_configured": false, 00:12:20.001 "data_offset": 0, 00:12:20.001 "data_size": 0 00:12:20.001 }, 00:12:20.001 { 00:12:20.001 "name": "BaseBdev3", 00:12:20.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.001 "is_configured": false, 00:12:20.001 "data_offset": 0, 00:12:20.001 "data_size": 0 00:12:20.001 } 00:12:20.001 ] 00:12:20.001 }' 00:12:20.001 07:58:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:20.001 07:58:25 -- common/autotest_common.sh@10 -- # set +x 00:12:20.568 07:58:26 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:20.568 [2024-07-13 07:58:26.328548] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:20.568 [2024-07-13 07:58:26.328594] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026480 name Existed_Raid, state configuring 00:12:20.568 07:58:26 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:12:20.568 07:58:26 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:20.826 07:58:26 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:21.085 BaseBdev1 00:12:21.085 07:58:26 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:12:21.085 07:58:26 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:12:21.085 07:58:26 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:21.085 07:58:26 -- common/autotest_common.sh@889 -- # local i 00:12:21.085 07:58:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:21.085 07:58:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:21.085 07:58:26 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:21.085 07:58:26 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:21.344 [ 00:12:21.344 { 00:12:21.344 "name": "BaseBdev1", 00:12:21.344 "aliases": [ 00:12:21.344 "5c3d0879-56ff-432b-8154-2dcc5293104a" 00:12:21.344 ], 00:12:21.344 "product_name": "Malloc disk", 00:12:21.344 "block_size": 512, 00:12:21.344 "num_blocks": 65536, 00:12:21.344 "uuid": "5c3d0879-56ff-432b-8154-2dcc5293104a", 00:12:21.344 "assigned_rate_limits": { 00:12:21.344 "rw_ios_per_sec": 0, 00:12:21.344 "rw_mbytes_per_sec": 0, 00:12:21.344 "r_mbytes_per_sec": 0, 00:12:21.344 "w_mbytes_per_sec": 0 00:12:21.344 }, 00:12:21.344 "claimed": false, 00:12:21.344 "zoned": false, 00:12:21.344 "supported_io_types": { 00:12:21.344 "read": true, 00:12:21.344 "write": true, 00:12:21.344 "unmap": true, 00:12:21.344 "write_zeroes": true, 00:12:21.344 "flush": true, 00:12:21.344 "reset": true, 00:12:21.344 "compare": false, 00:12:21.344 "compare_and_write": false, 00:12:21.344 "abort": true, 00:12:21.344 "nvme_admin": false, 00:12:21.344 "nvme_io": false 00:12:21.344 }, 00:12:21.344 "memory_domains": [ 00:12:21.344 { 00:12:21.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.344 "dma_device_type": 2 00:12:21.344 } 00:12:21.344 ], 00:12:21.344 "driver_specific": {} 00:12:21.344 } 00:12:21.344 ] 
00:12:21.344 07:58:27 -- common/autotest_common.sh@895 -- # return 0 00:12:21.344 07:58:27 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:21.344 [2024-07-13 07:58:27.143346] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:21.344 [2024-07-13 07:58:27.144658] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:21.344 [2024-07-13 07:58:27.144706] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:21.344 [2024-07-13 07:58:27.144716] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:21.344 [2024-07-13 07:58:27.144737] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:21.344 07:58:27 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:12:21.344 07:58:27 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:21.344 07:58:27 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:21.344 07:58:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:21.344 07:58:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:21.344 07:58:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:21.344 07:58:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:21.344 07:58:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:21.344 07:58:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:21.344 07:58:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:21.344 07:58:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:21.344 07:58:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:21.603 07:58:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:21.603 07:58:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.603 07:58:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:21.603 "name": "Existed_Raid", 00:12:21.603 "uuid": "6e654fcf-df48-4b6d-ae0b-8a3bfcf10f04", 00:12:21.603 "strip_size_kb": 64, 00:12:21.603 "state": "configuring", 00:12:21.603 "raid_level": "concat", 00:12:21.603 "superblock": true, 00:12:21.603 "num_base_bdevs": 3, 00:12:21.603 "num_base_bdevs_discovered": 1, 00:12:21.604 "num_base_bdevs_operational": 3, 00:12:21.604 "base_bdevs_list": [ 00:12:21.604 { 00:12:21.604 "name": "BaseBdev1", 00:12:21.604 "uuid": "5c3d0879-56ff-432b-8154-2dcc5293104a", 00:12:21.604 "is_configured": true, 00:12:21.604 "data_offset": 2048, 00:12:21.604 "data_size": 63488 00:12:21.604 }, 00:12:21.604 { 00:12:21.604 "name": "BaseBdev2", 00:12:21.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.604 "is_configured": false, 00:12:21.604 "data_offset": 0, 00:12:21.604 "data_size": 0 00:12:21.604 }, 00:12:21.604 { 00:12:21.604 "name": "BaseBdev3", 00:12:21.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.604 "is_configured": false, 00:12:21.604 "data_offset": 0, 00:12:21.604 "data_size": 0 00:12:21.604 } 00:12:21.604 ] 00:12:21.604 }' 00:12:21.604 07:58:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:21.604 07:58:27 -- common/autotest_common.sh@10 -- # set +x 00:12:22.171 07:58:27 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev2 00:12:22.429 [2024-07-13 07:58:28.122974] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:22.429 BaseBdev2 00:12:22.429 07:58:28 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:12:22.429 07:58:28 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:12:22.429 07:58:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:22.429 07:58:28 -- common/autotest_common.sh@889 -- # local i 00:12:22.429 07:58:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:22.429 07:58:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:22.429 07:58:28 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:22.688 07:58:28 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:22.688 [ 00:12:22.688 { 00:12:22.688 "name": "BaseBdev2", 00:12:22.688 "aliases": [ 00:12:22.688 "9cc9d832-3c8e-4ae2-9f7b-46b2409dd6d3" 00:12:22.688 ], 00:12:22.688 "product_name": "Malloc disk", 00:12:22.688 "block_size": 512, 00:12:22.688 "num_blocks": 65536, 00:12:22.688 "uuid": "9cc9d832-3c8e-4ae2-9f7b-46b2409dd6d3", 00:12:22.688 "assigned_rate_limits": { 00:12:22.688 "rw_ios_per_sec": 0, 00:12:22.688 "rw_mbytes_per_sec": 0, 00:12:22.688 "r_mbytes_per_sec": 0, 00:12:22.688 "w_mbytes_per_sec": 0 00:12:22.688 }, 00:12:22.688 "claimed": true, 00:12:22.688 "claim_type": "exclusive_write", 00:12:22.688 "zoned": false, 00:12:22.688 "supported_io_types": { 00:12:22.688 "read": true, 00:12:22.688 "write": true, 00:12:22.688 "unmap": true, 00:12:22.688 "write_zeroes": true, 00:12:22.688 "flush": true, 00:12:22.688 "reset": true, 00:12:22.688 "compare": false, 00:12:22.688 "compare_and_write": false, 00:12:22.688 "abort": true, 00:12:22.688 "nvme_admin": false, 00:12:22.688 "nvme_io": false 00:12:22.688 }, 00:12:22.688 "memory_domains": [ 00:12:22.688 { 00:12:22.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.688 "dma_device_type": 2 00:12:22.688 } 00:12:22.688 ], 00:12:22.688 "driver_specific": {} 00:12:22.688 } 00:12:22.688 ] 00:12:22.688 07:58:28 -- common/autotest_common.sh@895 -- # return 0 00:12:22.688 07:58:28 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:12:22.688 07:58:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:22.688 07:58:28 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:22.688 07:58:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:22.688 07:58:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:22.688 07:58:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:22.688 07:58:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:22.688 07:58:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:22.688 07:58:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:22.688 07:58:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:22.688 07:58:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:22.688 07:58:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:22.688 07:58:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:22.688 07:58:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.947 07:58:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 
00:12:22.947 "name": "Existed_Raid", 00:12:22.947 "uuid": "6e654fcf-df48-4b6d-ae0b-8a3bfcf10f04", 00:12:22.947 "strip_size_kb": 64, 00:12:22.947 "state": "configuring", 00:12:22.947 "raid_level": "concat", 00:12:22.947 "superblock": true, 00:12:22.947 "num_base_bdevs": 3, 00:12:22.947 "num_base_bdevs_discovered": 2, 00:12:22.947 "num_base_bdevs_operational": 3, 00:12:22.947 "base_bdevs_list": [ 00:12:22.947 { 00:12:22.947 "name": "BaseBdev1", 00:12:22.947 "uuid": "5c3d0879-56ff-432b-8154-2dcc5293104a", 00:12:22.947 "is_configured": true, 00:12:22.947 "data_offset": 2048, 00:12:22.947 "data_size": 63488 00:12:22.947 }, 00:12:22.947 { 00:12:22.947 "name": "BaseBdev2", 00:12:22.947 "uuid": "9cc9d832-3c8e-4ae2-9f7b-46b2409dd6d3", 00:12:22.947 "is_configured": true, 00:12:22.947 "data_offset": 2048, 00:12:22.947 "data_size": 63488 00:12:22.947 }, 00:12:22.947 { 00:12:22.947 "name": "BaseBdev3", 00:12:22.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.947 "is_configured": false, 00:12:22.947 "data_offset": 0, 00:12:22.947 "data_size": 0 00:12:22.947 } 00:12:22.947 ] 00:12:22.947 }' 00:12:22.947 07:58:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:22.947 07:58:28 -- common/autotest_common.sh@10 -- # set +x 00:12:23.514 07:58:29 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:23.772 BaseBdev3 00:12:23.772 [2024-07-13 07:58:29.346795] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:23.772 [2024-07-13 07:58:29.346918] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000027c80 00:12:23.772 [2024-07-13 07:58:29.346930] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:23.772 [2024-07-13 07:58:29.346994] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:12:23.772 [2024-07-13 07:58:29.347201] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000027c80 00:12:23.772 [2024-07-13 07:58:29.347212] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000027c80 00:12:23.772 [2024-07-13 07:58:29.347268] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.772 07:58:29 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:12:23.772 07:58:29 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:12:23.772 07:58:29 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:23.772 07:58:29 -- common/autotest_common.sh@889 -- # local i 00:12:23.772 07:58:29 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:23.772 07:58:29 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:23.772 07:58:29 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:23.772 07:58:29 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:24.032 [ 00:12:24.032 { 00:12:24.032 "name": "BaseBdev3", 00:12:24.032 "aliases": [ 00:12:24.032 "dd4f144a-1bae-4cd7-b975-5e6a92bf18c4" 00:12:24.032 ], 00:12:24.032 "product_name": "Malloc disk", 00:12:24.032 "block_size": 512, 00:12:24.032 "num_blocks": 65536, 00:12:24.032 "uuid": "dd4f144a-1bae-4cd7-b975-5e6a92bf18c4", 00:12:24.032 "assigned_rate_limits": { 00:12:24.032 "rw_ios_per_sec": 0, 00:12:24.032 "rw_mbytes_per_sec": 0, 
00:12:24.032 "r_mbytes_per_sec": 0, 00:12:24.032 "w_mbytes_per_sec": 0 00:12:24.032 }, 00:12:24.032 "claimed": true, 00:12:24.032 "claim_type": "exclusive_write", 00:12:24.032 "zoned": false, 00:12:24.032 "supported_io_types": { 00:12:24.032 "read": true, 00:12:24.032 "write": true, 00:12:24.032 "unmap": true, 00:12:24.032 "write_zeroes": true, 00:12:24.032 "flush": true, 00:12:24.032 "reset": true, 00:12:24.032 "compare": false, 00:12:24.032 "compare_and_write": false, 00:12:24.032 "abort": true, 00:12:24.032 "nvme_admin": false, 00:12:24.032 "nvme_io": false 00:12:24.032 }, 00:12:24.032 "memory_domains": [ 00:12:24.032 { 00:12:24.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.032 "dma_device_type": 2 00:12:24.032 } 00:12:24.032 ], 00:12:24.032 "driver_specific": {} 00:12:24.032 } 00:12:24.032 ] 00:12:24.032 07:58:29 -- common/autotest_common.sh@895 -- # return 0 00:12:24.032 07:58:29 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:12:24.032 07:58:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:24.032 07:58:29 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:24.032 07:58:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:24.032 07:58:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:24.032 07:58:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:24.032 07:58:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:24.032 07:58:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:24.032 07:58:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:24.032 07:58:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:24.032 07:58:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:24.032 07:58:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:24.032 07:58:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:24.032 07:58:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.291 07:58:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:24.291 "name": "Existed_Raid", 00:12:24.291 "uuid": "6e654fcf-df48-4b6d-ae0b-8a3bfcf10f04", 00:12:24.291 "strip_size_kb": 64, 00:12:24.291 "state": "online", 00:12:24.291 "raid_level": "concat", 00:12:24.291 "superblock": true, 00:12:24.291 "num_base_bdevs": 3, 00:12:24.291 "num_base_bdevs_discovered": 3, 00:12:24.291 "num_base_bdevs_operational": 3, 00:12:24.291 "base_bdevs_list": [ 00:12:24.291 { 00:12:24.291 "name": "BaseBdev1", 00:12:24.291 "uuid": "5c3d0879-56ff-432b-8154-2dcc5293104a", 00:12:24.291 "is_configured": true, 00:12:24.291 "data_offset": 2048, 00:12:24.291 "data_size": 63488 00:12:24.291 }, 00:12:24.291 { 00:12:24.291 "name": "BaseBdev2", 00:12:24.291 "uuid": "9cc9d832-3c8e-4ae2-9f7b-46b2409dd6d3", 00:12:24.291 "is_configured": true, 00:12:24.291 "data_offset": 2048, 00:12:24.291 "data_size": 63488 00:12:24.291 }, 00:12:24.291 { 00:12:24.291 "name": "BaseBdev3", 00:12:24.291 "uuid": "dd4f144a-1bae-4cd7-b975-5e6a92bf18c4", 00:12:24.291 "is_configured": true, 00:12:24.291 "data_offset": 2048, 00:12:24.291 "data_size": 63488 00:12:24.291 } 00:12:24.291 ] 00:12:24.291 }' 00:12:24.291 07:58:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:24.291 07:58:29 -- common/autotest_common.sh@10 -- # set +x 00:12:24.859 07:58:30 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:24.859 
[2024-07-13 07:58:30.611245] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:24.859 [2024-07-13 07:58:30.611284] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:24.859 [2024-07-13 07:58:30.611331] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:24.859 07:58:30 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:12:24.859 07:58:30 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:12:24.859 07:58:30 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:12:24.859 07:58:30 -- bdev/bdev_raid.sh@197 -- # return 1 00:12:24.859 07:58:30 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:12:24.859 07:58:30 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:12:24.859 07:58:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:24.859 07:58:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:12:24.859 07:58:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:24.859 07:58:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:24.859 07:58:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:24.859 07:58:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:24.859 07:58:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:24.859 07:58:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:24.859 07:58:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:24.859 07:58:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:24.859 07:58:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.118 07:58:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:25.118 "name": "Existed_Raid", 00:12:25.118 "uuid": "6e654fcf-df48-4b6d-ae0b-8a3bfcf10f04", 00:12:25.118 "strip_size_kb": 64, 00:12:25.118 "state": "offline", 00:12:25.118 "raid_level": "concat", 00:12:25.118 "superblock": true, 00:12:25.118 "num_base_bdevs": 3, 00:12:25.118 "num_base_bdevs_discovered": 2, 00:12:25.118 "num_base_bdevs_operational": 2, 00:12:25.118 "base_bdevs_list": [ 00:12:25.118 { 00:12:25.118 "name": null, 00:12:25.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.118 "is_configured": false, 00:12:25.118 "data_offset": 2048, 00:12:25.118 "data_size": 63488 00:12:25.118 }, 00:12:25.118 { 00:12:25.118 "name": "BaseBdev2", 00:12:25.118 "uuid": "9cc9d832-3c8e-4ae2-9f7b-46b2409dd6d3", 00:12:25.118 "is_configured": true, 00:12:25.118 "data_offset": 2048, 00:12:25.118 "data_size": 63488 00:12:25.118 }, 00:12:25.118 { 00:12:25.118 "name": "BaseBdev3", 00:12:25.118 "uuid": "dd4f144a-1bae-4cd7-b975-5e6a92bf18c4", 00:12:25.118 "is_configured": true, 00:12:25.118 "data_offset": 2048, 00:12:25.118 "data_size": 63488 00:12:25.118 } 00:12:25.118 ] 00:12:25.118 }' 00:12:25.118 07:58:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:25.118 07:58:30 -- common/autotest_common.sh@10 -- # set +x 00:12:25.684 07:58:31 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:12:25.684 07:58:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:25.684 07:58:31 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:25.684 07:58:31 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:12:25.684 07:58:31 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:12:25.684 07:58:31 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:12:25.684 07:58:31 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:25.943 [2024-07-13 07:58:31.616828] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:25.943 07:58:31 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:12:25.943 07:58:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:25.943 07:58:31 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:12:25.943 07:58:31 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:26.202 07:58:31 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:12:26.202 07:58:31 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:26.202 07:58:31 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:26.202 [2024-07-13 07:58:31.927354] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:26.202 [2024-07-13 07:58:31.927398] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027c80 name Existed_Raid, state offline 00:12:26.202 07:58:31 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:12:26.202 07:58:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:26.202 07:58:31 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:12:26.202 07:58:31 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:26.459 07:58:32 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:12:26.459 07:58:32 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:12:26.459 07:58:32 -- bdev/bdev_raid.sh@287 -- # killprocess 61503 00:12:26.459 07:58:32 -- common/autotest_common.sh@926 -- # '[' -z 61503 ']' 00:12:26.459 07:58:32 -- common/autotest_common.sh@930 -- # kill -0 61503 00:12:26.459 07:58:32 -- common/autotest_common.sh@931 -- # uname 00:12:26.459 07:58:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:26.459 07:58:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61503 00:12:26.459 killing process with pid 61503 00:12:26.459 07:58:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:26.459 07:58:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:26.459 07:58:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61503' 00:12:26.459 07:58:32 -- common/autotest_common.sh@945 -- # kill 61503 00:12:26.459 07:58:32 -- common/autotest_common.sh@950 -- # wait 61503 00:12:26.459 [2024-07-13 07:58:32.127054] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:26.459 [2024-07-13 07:58:32.127107] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:26.716 07:58:32 -- bdev/bdev_raid.sh@289 -- # return 0 00:12:26.716 00:12:26.716 real 0m9.391s 00:12:26.716 user 0m17.059s 00:12:26.716 sys 0m1.296s 00:12:26.716 07:58:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:26.716 07:58:32 -- common/autotest_common.sh@10 -- # set +x 00:12:26.716 ************************************ 00:12:26.716 END TEST raid_state_function_test_sb 00:12:26.716 ************************************ 00:12:26.717 07:58:32 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:12:26.717 07:58:32 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:12:26.717 07:58:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 
00:12:26.717 07:58:32 -- common/autotest_common.sh@10 -- # set +x 00:12:26.717 ************************************ 00:12:26.717 START TEST raid_superblock_test 00:12:26.717 ************************************ 00:12:26.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:26.717 07:58:32 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 3 00:12:26.717 07:58:32 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:12:26.717 07:58:32 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:12:26.717 07:58:32 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:12:26.717 07:58:32 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:12:26.717 07:58:32 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:12:26.717 07:58:32 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:12:26.717 07:58:32 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:12:26.717 07:58:32 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:12:26.717 07:58:32 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:12:26.717 07:58:32 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:12:26.717 07:58:32 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:12:26.717 07:58:32 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:12:26.717 07:58:32 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:12:26.717 07:58:32 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:12:26.717 07:58:32 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:12:26.717 07:58:32 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:12:26.717 07:58:32 -- bdev/bdev_raid.sh@357 -- # raid_pid=61860 00:12:26.717 07:58:32 -- bdev/bdev_raid.sh@358 -- # waitforlisten 61860 /var/tmp/spdk-raid.sock 00:12:26.717 07:58:32 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:12:26.717 07:58:32 -- common/autotest_common.sh@819 -- # '[' -z 61860 ']' 00:12:26.717 07:58:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:26.717 07:58:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:26.717 07:58:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:26.717 07:58:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:26.717 07:58:32 -- common/autotest_common.sh@10 -- # set +x 00:12:26.717 [2024-07-13 07:58:32.513393] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:12:26.717 [2024-07-13 07:58:32.513644] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61860 ] 00:12:26.975 [2024-07-13 07:58:32.668862] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.975 [2024-07-13 07:58:32.720124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.975 [2024-07-13 07:58:32.769737] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:27.539 07:58:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:27.539 07:58:33 -- common/autotest_common.sh@852 -- # return 0 00:12:27.539 07:58:33 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:12:27.539 07:58:33 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:12:27.539 07:58:33 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:12:27.539 07:58:33 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:12:27.539 07:58:33 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:27.539 07:58:33 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:27.539 07:58:33 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:12:27.539 07:58:33 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:27.539 07:58:33 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:12:27.797 malloc1 00:12:27.797 07:58:33 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:27.797 [2024-07-13 07:58:33.604545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:27.797 [2024-07-13 07:58:33.604640] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.797 [2024-07-13 07:58:33.604701] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000026180 00:12:27.797 [2024-07-13 07:58:33.604740] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.055 pt1 00:12:28.055 [2024-07-13 07:58:33.606418] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.055 [2024-07-13 07:58:33.606471] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:28.055 07:58:33 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:12:28.055 07:58:33 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:12:28.055 07:58:33 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:12:28.055 07:58:33 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:12:28.055 07:58:33 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:28.055 07:58:33 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:28.055 07:58:33 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:12:28.055 07:58:33 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:28.055 07:58:33 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:12:28.055 malloc2 00:12:28.055 07:58:33 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:12:28.313 [2024-07-13 07:58:33.897374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:28.313 [2024-07-13 07:58:33.897432] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.313 [2024-07-13 07:58:33.897645] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027f80 00:12:28.313 [2024-07-13 07:58:33.897689] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.313 [2024-07-13 07:58:33.898917] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.313 [2024-07-13 07:58:33.898949] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:28.313 pt2 00:12:28.313 07:58:33 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:12:28.313 07:58:33 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:12:28.314 07:58:33 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:12:28.314 07:58:33 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:12:28.314 07:58:33 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:28.314 07:58:33 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:28.314 07:58:33 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:12:28.314 07:58:33 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:28.314 07:58:33 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:12:28.314 malloc3 00:12:28.314 07:58:34 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:28.571 [2024-07-13 07:58:34.230188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:28.571 [2024-07-13 07:58:34.230259] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.571 [2024-07-13 07:58:34.230304] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000029d80 00:12:28.571 [2024-07-13 07:58:34.230342] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.571 [2024-07-13 07:58:34.233516] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.571 [2024-07-13 07:58:34.233628] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:28.571 pt3 00:12:28.571 07:58:34 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:12:28.571 07:58:34 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:12:28.571 07:58:34 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:12:28.829 [2024-07-13 07:58:34.450436] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:28.829 [2024-07-13 07:58:34.451970] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:28.829 [2024-07-13 07:58:34.452012] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:28.829 [2024-07-13 07:58:34.452115] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002b280 00:12:28.829 [2024-07-13 07:58:34.452125] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:28.829 [2024-07-13 07:58:34.452209] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:12:28.829 [2024-07-13 07:58:34.452383] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002b280 00:12:28.829 [2024-07-13 07:58:34.452392] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002b280 00:12:28.829 [2024-07-13 07:58:34.452453] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.829 07:58:34 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:28.829 07:58:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:12:28.829 07:58:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:28.829 07:58:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:28.829 07:58:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:28.829 07:58:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:28.829 07:58:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:28.829 07:58:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:28.829 07:58:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:28.829 07:58:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:28.829 07:58:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:28.829 07:58:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.829 07:58:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:28.829 "name": "raid_bdev1", 00:12:28.829 "uuid": "bdcb5186-722d-4878-9fda-25a8d70ab40e", 00:12:28.829 "strip_size_kb": 64, 00:12:28.829 "state": "online", 00:12:28.829 "raid_level": "concat", 00:12:28.829 "superblock": true, 00:12:28.829 "num_base_bdevs": 3, 00:12:28.829 "num_base_bdevs_discovered": 3, 00:12:28.829 "num_base_bdevs_operational": 3, 00:12:28.829 "base_bdevs_list": [ 00:12:28.829 { 00:12:28.829 "name": "pt1", 00:12:28.829 "uuid": "45b6079f-67cb-509d-9765-bd85d0c51f27", 00:12:28.829 "is_configured": true, 00:12:28.829 "data_offset": 2048, 00:12:28.829 "data_size": 63488 00:12:28.829 }, 00:12:28.829 { 00:12:28.829 "name": "pt2", 00:12:28.829 "uuid": "9668396f-a6a4-5e77-95b1-3c7b20fef4c4", 00:12:28.829 "is_configured": true, 00:12:28.829 "data_offset": 2048, 00:12:28.829 "data_size": 63488 00:12:28.829 }, 00:12:28.829 { 00:12:28.829 "name": "pt3", 00:12:28.829 "uuid": "b08318df-f1a8-570a-a0f5-9f1f77c3a4f3", 00:12:28.829 "is_configured": true, 00:12:28.829 "data_offset": 2048, 00:12:28.829 "data_size": 63488 00:12:28.829 } 00:12:28.829 ] 00:12:28.829 }' 00:12:28.829 07:58:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:28.830 07:58:34 -- common/autotest_common.sh@10 -- # set +x 00:12:29.395 07:58:35 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:29.395 07:58:35 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:12:29.653 [2024-07-13 07:58:35.306587] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:29.653 07:58:35 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=bdcb5186-722d-4878-9fda-25a8d70ab40e 00:12:29.653 07:58:35 -- bdev/bdev_raid.sh@380 -- # '[' -z bdcb5186-722d-4878-9fda-25a8d70ab40e ']' 00:12:29.653 07:58:35 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:29.653 [2024-07-13 07:58:35.454452] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:29.653 [2024-07-13 07:58:35.454500] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:29.653 [2024-07-13 07:58:35.454563] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:29.653 [2024-07-13 07:58:35.454602] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:29.653 [2024-07-13 07:58:35.454611] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002b280 name raid_bdev1, state offline 00:12:29.911 07:58:35 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:12:29.911 07:58:35 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:29.911 07:58:35 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:12:29.911 07:58:35 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:12:29.911 07:58:35 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:12:29.911 07:58:35 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:30.169 07:58:35 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:12:30.169 07:58:35 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:30.169 07:58:35 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:12:30.169 07:58:35 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:30.426 07:58:36 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:30.426 07:58:36 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:12:30.685 07:58:36 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:12:30.685 07:58:36 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:30.685 07:58:36 -- common/autotest_common.sh@640 -- # local es=0 00:12:30.685 07:58:36 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:30.685 07:58:36 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:30.685 07:58:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:30.685 07:58:36 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:30.685 07:58:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:30.685 07:58:36 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:30.685 07:58:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:30.685 07:58:36 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:30.685 07:58:36 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:30.685 07:58:36 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:30.943 [2024-07-13 07:58:36.510585] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:30.943 [2024-07-13 07:58:36.511903] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:30.943 [2024-07-13 07:58:36.511934] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:30.943 [2024-07-13 07:58:36.511961] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:12:30.943 [2024-07-13 07:58:36.512017] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:12:30.943 [2024-07-13 07:58:36.512040] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:12:30.943 [2024-07-13 07:58:36.512077] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:30.943 [2024-07-13 07:58:36.512088] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002b880 name raid_bdev1, state configuring 00:12:30.943 request: 00:12:30.943 { 00:12:30.943 "name": "raid_bdev1", 00:12:30.943 "raid_level": "concat", 00:12:30.943 "base_bdevs": [ 00:12:30.943 "malloc1", 00:12:30.943 "malloc2", 00:12:30.943 "malloc3" 00:12:30.943 ], 00:12:30.943 "superblock": false, 00:12:30.943 "strip_size_kb": 64, 00:12:30.943 "method": "bdev_raid_create", 00:12:30.943 "req_id": 1 00:12:30.943 } 00:12:30.943 Got JSON-RPC error response 00:12:30.943 response: 00:12:30.943 { 00:12:30.943 "code": -17, 00:12:30.943 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:30.943 } 00:12:30.943 07:58:36 -- common/autotest_common.sh@643 -- # es=1 00:12:30.943 07:58:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:30.943 07:58:36 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:30.943 07:58:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:30.943 07:58:36 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:30.943 07:58:36 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:12:30.943 07:58:36 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:12:30.943 07:58:36 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:12:30.943 07:58:36 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:31.201 [2024-07-13 07:58:36.874594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:31.201 [2024-07-13 07:58:36.874663] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.201 [2024-07-13 07:58:36.874705] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002ca80 00:12:31.201 [2024-07-13 07:58:36.874731] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.201 [2024-07-13 07:58:36.876318] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.201 [2024-07-13 07:58:36.876356] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:31.201 [2024-07-13 07:58:36.876422] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:12:31.201 [2024-07-13 07:58:36.876490] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:31.201 pt1 00:12:31.201 07:58:36 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 
configuring concat 64 3 00:12:31.201 07:58:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:12:31.201 07:58:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:31.201 07:58:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:31.201 07:58:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:31.201 07:58:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:31.201 07:58:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:31.201 07:58:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:31.201 07:58:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:31.201 07:58:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:31.201 07:58:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.201 07:58:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:31.460 07:58:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:31.460 "name": "raid_bdev1", 00:12:31.460 "uuid": "bdcb5186-722d-4878-9fda-25a8d70ab40e", 00:12:31.460 "strip_size_kb": 64, 00:12:31.460 "state": "configuring", 00:12:31.460 "raid_level": "concat", 00:12:31.460 "superblock": true, 00:12:31.460 "num_base_bdevs": 3, 00:12:31.460 "num_base_bdevs_discovered": 1, 00:12:31.460 "num_base_bdevs_operational": 3, 00:12:31.460 "base_bdevs_list": [ 00:12:31.460 { 00:12:31.460 "name": "pt1", 00:12:31.460 "uuid": "45b6079f-67cb-509d-9765-bd85d0c51f27", 00:12:31.460 "is_configured": true, 00:12:31.460 "data_offset": 2048, 00:12:31.460 "data_size": 63488 00:12:31.460 }, 00:12:31.460 { 00:12:31.460 "name": null, 00:12:31.460 "uuid": "9668396f-a6a4-5e77-95b1-3c7b20fef4c4", 00:12:31.460 "is_configured": false, 00:12:31.460 "data_offset": 2048, 00:12:31.460 "data_size": 63488 00:12:31.460 }, 00:12:31.460 { 00:12:31.460 "name": null, 00:12:31.460 "uuid": "b08318df-f1a8-570a-a0f5-9f1f77c3a4f3", 00:12:31.460 "is_configured": false, 00:12:31.460 "data_offset": 2048, 00:12:31.460 "data_size": 63488 00:12:31.460 } 00:12:31.460 ] 00:12:31.460 }' 00:12:31.460 07:58:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:31.460 07:58:37 -- common/autotest_common.sh@10 -- # set +x 00:12:32.053 07:58:37 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:12:32.053 07:58:37 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:32.054 [2024-07-13 07:58:37.734688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:32.054 [2024-07-13 07:58:37.734757] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.054 [2024-07-13 07:58:37.734802] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002e580 00:12:32.054 [2024-07-13 07:58:37.734822] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.054 [2024-07-13 07:58:37.735069] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.054 [2024-07-13 07:58:37.735100] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:32.054 [2024-07-13 07:58:37.735163] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:12:32.054 [2024-07-13 07:58:37.735181] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:32.054 pt2 00:12:32.054 07:58:37 -- 
bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:32.313 [2024-07-13 07:58:37.878733] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:32.313 07:58:37 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:12:32.313 07:58:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:12:32.313 07:58:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:32.313 07:58:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:32.313 07:58:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:32.313 07:58:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:32.313 07:58:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:32.313 07:58:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:32.313 07:58:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:32.313 07:58:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:32.313 07:58:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.313 07:58:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:32.313 07:58:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:32.313 "name": "raid_bdev1", 00:12:32.313 "uuid": "bdcb5186-722d-4878-9fda-25a8d70ab40e", 00:12:32.313 "strip_size_kb": 64, 00:12:32.313 "state": "configuring", 00:12:32.313 "raid_level": "concat", 00:12:32.313 "superblock": true, 00:12:32.313 "num_base_bdevs": 3, 00:12:32.313 "num_base_bdevs_discovered": 1, 00:12:32.313 "num_base_bdevs_operational": 3, 00:12:32.313 "base_bdevs_list": [ 00:12:32.313 { 00:12:32.313 "name": "pt1", 00:12:32.313 "uuid": "45b6079f-67cb-509d-9765-bd85d0c51f27", 00:12:32.313 "is_configured": true, 00:12:32.313 "data_offset": 2048, 00:12:32.313 "data_size": 63488 00:12:32.313 }, 00:12:32.313 { 00:12:32.313 "name": null, 00:12:32.313 "uuid": "9668396f-a6a4-5e77-95b1-3c7b20fef4c4", 00:12:32.313 "is_configured": false, 00:12:32.313 "data_offset": 2048, 00:12:32.313 "data_size": 63488 00:12:32.313 }, 00:12:32.313 { 00:12:32.313 "name": null, 00:12:32.313 "uuid": "b08318df-f1a8-570a-a0f5-9f1f77c3a4f3", 00:12:32.313 "is_configured": false, 00:12:32.313 "data_offset": 2048, 00:12:32.313 "data_size": 63488 00:12:32.313 } 00:12:32.313 ] 00:12:32.313 }' 00:12:32.313 07:58:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:32.313 07:58:38 -- common/autotest_common.sh@10 -- # set +x 00:12:32.881 07:58:38 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:12:32.881 07:58:38 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:12:32.881 07:58:38 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:33.138 [2024-07-13 07:58:38.734781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:33.138 [2024-07-13 07:58:38.734854] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.138 [2024-07-13 07:58:38.734925] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002fa80 00:12:33.138 [2024-07-13 07:58:38.734950] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.138 [2024-07-13 07:58:38.735252] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.138 [2024-07-13 07:58:38.735288] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:33.139 [2024-07-13 07:58:38.735351] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:12:33.139 [2024-07-13 07:58:38.735370] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:33.139 pt2 00:12:33.139 07:58:38 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:12:33.139 07:58:38 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:12:33.139 07:58:38 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:33.395 [2024-07-13 07:58:38.954865] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:33.395 [2024-07-13 07:58:38.954944] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.395 [2024-07-13 07:58:38.954979] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000031280 00:12:33.395 [2024-07-13 07:58:38.955003] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.395 [2024-07-13 07:58:38.955253] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.395 [2024-07-13 07:58:38.955284] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:33.395 [2024-07-13 07:58:38.955346] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:12:33.395 [2024-07-13 07:58:38.955370] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:33.395 [2024-07-13 07:58:38.955429] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002df80 00:12:33.395 [2024-07-13 07:58:38.955437] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:33.395 [2024-07-13 07:58:38.955701] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:12:33.395 [2024-07-13 07:58:38.955887] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002df80 00:12:33.395 [2024-07-13 07:58:38.955898] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002df80 00:12:33.395 [2024-07-13 07:58:38.955958] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.395 pt3 00:12:33.395 07:58:38 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:12:33.395 07:58:38 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:12:33.395 07:58:38 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:33.395 07:58:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:12:33.395 07:58:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:33.395 07:58:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:33.395 07:58:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:33.395 07:58:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:33.395 07:58:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:33.395 07:58:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:33.396 07:58:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:33.396 07:58:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:33.396 07:58:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.396 07:58:38 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:33.396 07:58:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:33.396 "name": "raid_bdev1", 00:12:33.396 "uuid": "bdcb5186-722d-4878-9fda-25a8d70ab40e", 00:12:33.396 "strip_size_kb": 64, 00:12:33.396 "state": "online", 00:12:33.396 "raid_level": "concat", 00:12:33.396 "superblock": true, 00:12:33.396 "num_base_bdevs": 3, 00:12:33.396 "num_base_bdevs_discovered": 3, 00:12:33.396 "num_base_bdevs_operational": 3, 00:12:33.396 "base_bdevs_list": [ 00:12:33.396 { 00:12:33.396 "name": "pt1", 00:12:33.396 "uuid": "45b6079f-67cb-509d-9765-bd85d0c51f27", 00:12:33.396 "is_configured": true, 00:12:33.396 "data_offset": 2048, 00:12:33.396 "data_size": 63488 00:12:33.396 }, 00:12:33.396 { 00:12:33.396 "name": "pt2", 00:12:33.396 "uuid": "9668396f-a6a4-5e77-95b1-3c7b20fef4c4", 00:12:33.396 "is_configured": true, 00:12:33.396 "data_offset": 2048, 00:12:33.396 "data_size": 63488 00:12:33.396 }, 00:12:33.396 { 00:12:33.396 "name": "pt3", 00:12:33.396 "uuid": "b08318df-f1a8-570a-a0f5-9f1f77c3a4f3", 00:12:33.396 "is_configured": true, 00:12:33.396 "data_offset": 2048, 00:12:33.396 "data_size": 63488 00:12:33.396 } 00:12:33.396 ] 00:12:33.396 }' 00:12:33.396 07:58:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:33.396 07:58:39 -- common/autotest_common.sh@10 -- # set +x 00:12:34.329 07:58:39 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:34.329 07:58:39 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:12:34.329 [2024-07-13 07:58:39.983067] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:34.329 07:58:39 -- bdev/bdev_raid.sh@430 -- # '[' bdcb5186-722d-4878-9fda-25a8d70ab40e '!=' bdcb5186-722d-4878-9fda-25a8d70ab40e ']' 00:12:34.329 07:58:39 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:12:34.329 07:58:39 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:12:34.329 07:58:40 -- bdev/bdev_raid.sh@197 -- # return 1 00:12:34.329 07:58:40 -- bdev/bdev_raid.sh@511 -- # killprocess 61860 00:12:34.329 07:58:40 -- common/autotest_common.sh@926 -- # '[' -z 61860 ']' 00:12:34.329 07:58:40 -- common/autotest_common.sh@930 -- # kill -0 61860 00:12:34.329 07:58:40 -- common/autotest_common.sh@931 -- # uname 00:12:34.329 07:58:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:34.329 07:58:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61860 00:12:34.329 killing process with pid 61860 00:12:34.329 07:58:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:34.329 07:58:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:34.329 07:58:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61860' 00:12:34.329 07:58:40 -- common/autotest_common.sh@945 -- # kill 61860 00:12:34.329 [2024-07-13 07:58:40.022057] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:34.329 07:58:40 -- common/autotest_common.sh@950 -- # wait 61860 00:12:34.329 [2024-07-13 07:58:40.022115] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:34.329 [2024-07-13 07:58:40.022148] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:34.329 [2024-07-13 07:58:40.022156] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002df80 name raid_bdev1, state offline 00:12:34.329 [2024-07-13 07:58:40.051794] 
bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:34.586 07:58:40 -- bdev/bdev_raid.sh@513 -- # return 0 00:12:34.587 00:12:34.587 real 0m7.870s 00:12:34.587 user 0m14.233s 00:12:34.587 sys 0m1.050s 00:12:34.587 07:58:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:34.587 07:58:40 -- common/autotest_common.sh@10 -- # set +x 00:12:34.587 ************************************ 00:12:34.587 END TEST raid_superblock_test 00:12:34.587 ************************************ 00:12:34.587 07:58:40 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:12:34.587 07:58:40 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:12:34.587 07:58:40 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:12:34.587 07:58:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:34.587 07:58:40 -- common/autotest_common.sh@10 -- # set +x 00:12:34.587 ************************************ 00:12:34.587 START TEST raid_state_function_test 00:12:34.587 ************************************ 00:12:34.587 07:58:40 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 false 00:12:34.587 07:58:40 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:12:34.587 07:58:40 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:12:34.587 07:58:40 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:12:34.587 07:58:40 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:12:34.587 07:58:40 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:12:34.587 07:58:40 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:12:34.587 07:58:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:34.587 07:58:40 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:12:34.587 07:58:40 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:34.587 07:58:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:34.587 07:58:40 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:12:34.587 07:58:40 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:34.587 07:58:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:34.587 07:58:40 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:12:34.587 07:58:40 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:34.587 07:58:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:34.587 Process raid pid: 62143 00:12:34.587 07:58:40 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:12:34.587 07:58:40 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:12:34.587 07:58:40 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:12:34.587 07:58:40 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:12:34.587 07:58:40 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:12:34.587 07:58:40 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:12:34.587 07:58:40 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:12:34.587 07:58:40 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:12:34.587 07:58:40 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:12:34.587 07:58:40 -- bdev/bdev_raid.sh@226 -- # raid_pid=62143 00:12:34.587 07:58:40 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 62143' 00:12:34.587 07:58:40 -- bdev/bdev_raid.sh@228 -- # waitforlisten 62143 /var/tmp/spdk-raid.sock 00:12:34.587 07:58:40 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:34.587 07:58:40 -- common/autotest_common.sh@819 -- # '[' -z 62143 ']' 
00:12:34.587 07:58:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:34.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:34.587 07:58:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:34.587 07:58:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:34.587 07:58:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:34.587 07:58:40 -- common/autotest_common.sh@10 -- # set +x 00:12:34.845 [2024-07-13 07:58:40.440234] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:12:34.845 [2024-07-13 07:58:40.440688] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.845 [2024-07-13 07:58:40.593828] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.845 [2024-07-13 07:58:40.646312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.103 [2024-07-13 07:58:40.695652] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:35.668 07:58:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:35.668 07:58:41 -- common/autotest_common.sh@852 -- # return 0 00:12:35.668 07:58:41 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:35.668 [2024-07-13 07:58:41.308953] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:35.668 [2024-07-13 07:58:41.309044] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:35.668 [2024-07-13 07:58:41.309062] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:35.668 [2024-07-13 07:58:41.309091] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:35.668 [2024-07-13 07:58:41.309103] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:35.668 [2024-07-13 07:58:41.309153] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:35.668 07:58:41 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:35.668 07:58:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:35.668 07:58:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:35.668 07:58:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:35.669 07:58:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:35.669 07:58:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:35.669 07:58:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:35.669 07:58:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:35.669 07:58:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:35.669 07:58:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:35.669 07:58:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.669 07:58:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:35.927 07:58:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 
00:12:35.927 "name": "Existed_Raid", 00:12:35.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.927 "strip_size_kb": 0, 00:12:35.927 "state": "configuring", 00:12:35.927 "raid_level": "raid1", 00:12:35.927 "superblock": false, 00:12:35.927 "num_base_bdevs": 3, 00:12:35.927 "num_base_bdevs_discovered": 0, 00:12:35.927 "num_base_bdevs_operational": 3, 00:12:35.927 "base_bdevs_list": [ 00:12:35.927 { 00:12:35.927 "name": "BaseBdev1", 00:12:35.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.927 "is_configured": false, 00:12:35.927 "data_offset": 0, 00:12:35.927 "data_size": 0 00:12:35.927 }, 00:12:35.927 { 00:12:35.927 "name": "BaseBdev2", 00:12:35.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.927 "is_configured": false, 00:12:35.927 "data_offset": 0, 00:12:35.927 "data_size": 0 00:12:35.927 }, 00:12:35.927 { 00:12:35.927 "name": "BaseBdev3", 00:12:35.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.927 "is_configured": false, 00:12:35.927 "data_offset": 0, 00:12:35.927 "data_size": 0 00:12:35.927 } 00:12:35.927 ] 00:12:35.927 }' 00:12:35.927 07:58:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:35.927 07:58:41 -- common/autotest_common.sh@10 -- # set +x 00:12:36.185 07:58:41 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:36.444 [2024-07-13 07:58:42.184918] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:36.444 [2024-07-13 07:58:42.184956] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000025880 name Existed_Raid, state configuring 00:12:36.444 07:58:42 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:36.703 [2024-07-13 07:58:42.336939] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:36.703 [2024-07-13 07:58:42.336991] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:36.703 [2024-07-13 07:58:42.337000] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:36.703 [2024-07-13 07:58:42.337017] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:36.703 [2024-07-13 07:58:42.337024] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:36.703 [2024-07-13 07:58:42.337046] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:36.703 07:58:42 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:36.703 BaseBdev1 00:12:36.703 [2024-07-13 07:58:42.490387] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:36.703 07:58:42 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:12:36.703 07:58:42 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:12:36.703 07:58:42 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:36.703 07:58:42 -- common/autotest_common.sh@889 -- # local i 00:12:36.703 07:58:42 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:36.703 07:58:42 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:36.703 07:58:42 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:12:36.963 07:58:42 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:37.223 [ 00:12:37.223 { 00:12:37.223 "name": "BaseBdev1", 00:12:37.223 "aliases": [ 00:12:37.223 "9d33912e-b4e5-4c4e-b37e-f402a795c7fd" 00:12:37.223 ], 00:12:37.223 "product_name": "Malloc disk", 00:12:37.223 "block_size": 512, 00:12:37.223 "num_blocks": 65536, 00:12:37.223 "uuid": "9d33912e-b4e5-4c4e-b37e-f402a795c7fd", 00:12:37.223 "assigned_rate_limits": { 00:12:37.223 "rw_ios_per_sec": 0, 00:12:37.223 "rw_mbytes_per_sec": 0, 00:12:37.223 "r_mbytes_per_sec": 0, 00:12:37.223 "w_mbytes_per_sec": 0 00:12:37.223 }, 00:12:37.223 "claimed": true, 00:12:37.223 "claim_type": "exclusive_write", 00:12:37.223 "zoned": false, 00:12:37.223 "supported_io_types": { 00:12:37.223 "read": true, 00:12:37.223 "write": true, 00:12:37.223 "unmap": true, 00:12:37.223 "write_zeroes": true, 00:12:37.223 "flush": true, 00:12:37.223 "reset": true, 00:12:37.223 "compare": false, 00:12:37.223 "compare_and_write": false, 00:12:37.223 "abort": true, 00:12:37.223 "nvme_admin": false, 00:12:37.223 "nvme_io": false 00:12:37.223 }, 00:12:37.223 "memory_domains": [ 00:12:37.223 { 00:12:37.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.223 "dma_device_type": 2 00:12:37.223 } 00:12:37.223 ], 00:12:37.223 "driver_specific": {} 00:12:37.223 } 00:12:37.223 ] 00:12:37.223 07:58:42 -- common/autotest_common.sh@895 -- # return 0 00:12:37.223 07:58:42 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:37.223 07:58:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:37.223 07:58:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:37.223 07:58:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:37.223 07:58:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:37.223 07:58:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:37.223 07:58:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:37.223 07:58:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:37.223 07:58:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:37.223 07:58:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:37.223 07:58:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:37.223 07:58:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.223 07:58:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:37.223 "name": "Existed_Raid", 00:12:37.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.223 "strip_size_kb": 0, 00:12:37.223 "state": "configuring", 00:12:37.223 "raid_level": "raid1", 00:12:37.223 "superblock": false, 00:12:37.223 "num_base_bdevs": 3, 00:12:37.223 "num_base_bdevs_discovered": 1, 00:12:37.223 "num_base_bdevs_operational": 3, 00:12:37.223 "base_bdevs_list": [ 00:12:37.223 { 00:12:37.223 "name": "BaseBdev1", 00:12:37.223 "uuid": "9d33912e-b4e5-4c4e-b37e-f402a795c7fd", 00:12:37.223 "is_configured": true, 00:12:37.223 "data_offset": 0, 00:12:37.223 "data_size": 65536 00:12:37.223 }, 00:12:37.223 { 00:12:37.223 "name": "BaseBdev2", 00:12:37.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.223 "is_configured": false, 00:12:37.223 "data_offset": 0, 00:12:37.223 "data_size": 0 00:12:37.223 }, 00:12:37.223 { 00:12:37.223 "name": "BaseBdev3", 00:12:37.223 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:37.223 "is_configured": false, 00:12:37.223 "data_offset": 0, 00:12:37.223 "data_size": 0 00:12:37.223 } 00:12:37.223 ] 00:12:37.223 }' 00:12:37.223 07:58:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:37.223 07:58:42 -- common/autotest_common.sh@10 -- # set +x 00:12:37.789 07:58:43 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:38.047 [2024-07-13 07:58:43.750611] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:38.047 [2024-07-13 07:58:43.750668] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026480 name Existed_Raid, state configuring 00:12:38.047 07:58:43 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:12:38.047 07:58:43 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:38.306 [2024-07-13 07:58:43.946649] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:38.306 [2024-07-13 07:58:43.948156] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:38.306 [2024-07-13 07:58:43.948211] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:38.306 [2024-07-13 07:58:43.948222] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:38.306 [2024-07-13 07:58:43.948248] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:38.306 07:58:43 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:12:38.306 07:58:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:38.306 07:58:43 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:38.306 07:58:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:38.306 07:58:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:38.306 07:58:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:38.306 07:58:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:38.306 07:58:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:38.306 07:58:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:38.306 07:58:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:38.306 07:58:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:38.306 07:58:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:38.306 07:58:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:38.306 07:58:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.565 07:58:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:38.565 "name": "Existed_Raid", 00:12:38.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.565 "strip_size_kb": 0, 00:12:38.565 "state": "configuring", 00:12:38.565 "raid_level": "raid1", 00:12:38.565 "superblock": false, 00:12:38.565 "num_base_bdevs": 3, 00:12:38.565 "num_base_bdevs_discovered": 1, 00:12:38.565 "num_base_bdevs_operational": 3, 00:12:38.565 "base_bdevs_list": [ 00:12:38.565 { 00:12:38.565 "name": "BaseBdev1", 00:12:38.565 "uuid": "9d33912e-b4e5-4c4e-b37e-f402a795c7fd", 00:12:38.565 "is_configured": true, 00:12:38.565 "data_offset": 0, 00:12:38.565 "data_size": 65536 00:12:38.565 }, 
00:12:38.565 { 00:12:38.565 "name": "BaseBdev2", 00:12:38.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.565 "is_configured": false, 00:12:38.565 "data_offset": 0, 00:12:38.565 "data_size": 0 00:12:38.565 }, 00:12:38.565 { 00:12:38.565 "name": "BaseBdev3", 00:12:38.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.565 "is_configured": false, 00:12:38.565 "data_offset": 0, 00:12:38.565 "data_size": 0 00:12:38.565 } 00:12:38.565 ] 00:12:38.565 }' 00:12:38.565 07:58:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:38.565 07:58:44 -- common/autotest_common.sh@10 -- # set +x 00:12:39.132 07:58:44 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:39.391 BaseBdev2 00:12:39.391 [2024-07-13 07:58:45.017706] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:39.391 07:58:45 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:12:39.391 07:58:45 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:12:39.391 07:58:45 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:39.391 07:58:45 -- common/autotest_common.sh@889 -- # local i 00:12:39.391 07:58:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:39.391 07:58:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:39.391 07:58:45 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:39.391 07:58:45 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:39.650 [ 00:12:39.650 { 00:12:39.650 "name": "BaseBdev2", 00:12:39.650 "aliases": [ 00:12:39.650 "cfaccaa0-d708-4cba-91bb-5d115f8c93a5" 00:12:39.650 ], 00:12:39.650 "product_name": "Malloc disk", 00:12:39.650 "block_size": 512, 00:12:39.650 "num_blocks": 65536, 00:12:39.650 "uuid": "cfaccaa0-d708-4cba-91bb-5d115f8c93a5", 00:12:39.650 "assigned_rate_limits": { 00:12:39.650 "rw_ios_per_sec": 0, 00:12:39.650 "rw_mbytes_per_sec": 0, 00:12:39.650 "r_mbytes_per_sec": 0, 00:12:39.650 "w_mbytes_per_sec": 0 00:12:39.650 }, 00:12:39.650 "claimed": true, 00:12:39.650 "claim_type": "exclusive_write", 00:12:39.650 "zoned": false, 00:12:39.650 "supported_io_types": { 00:12:39.650 "read": true, 00:12:39.650 "write": true, 00:12:39.650 "unmap": true, 00:12:39.650 "write_zeroes": true, 00:12:39.650 "flush": true, 00:12:39.650 "reset": true, 00:12:39.650 "compare": false, 00:12:39.650 "compare_and_write": false, 00:12:39.650 "abort": true, 00:12:39.650 "nvme_admin": false, 00:12:39.650 "nvme_io": false 00:12:39.650 }, 00:12:39.650 "memory_domains": [ 00:12:39.650 { 00:12:39.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.650 "dma_device_type": 2 00:12:39.650 } 00:12:39.650 ], 00:12:39.650 "driver_specific": {} 00:12:39.650 } 00:12:39.650 ] 00:12:39.650 07:58:45 -- common/autotest_common.sh@895 -- # return 0 00:12:39.650 07:58:45 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:12:39.650 07:58:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:39.650 07:58:45 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:39.650 07:58:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:39.650 07:58:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:39.650 07:58:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:39.650 07:58:45 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:39.650 07:58:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:39.650 07:58:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:39.650 07:58:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:39.650 07:58:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:39.650 07:58:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:39.650 07:58:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.650 07:58:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:39.909 07:58:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:39.909 "name": "Existed_Raid", 00:12:39.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.909 "strip_size_kb": 0, 00:12:39.909 "state": "configuring", 00:12:39.909 "raid_level": "raid1", 00:12:39.909 "superblock": false, 00:12:39.909 "num_base_bdevs": 3, 00:12:39.909 "num_base_bdevs_discovered": 2, 00:12:39.909 "num_base_bdevs_operational": 3, 00:12:39.909 "base_bdevs_list": [ 00:12:39.909 { 00:12:39.909 "name": "BaseBdev1", 00:12:39.909 "uuid": "9d33912e-b4e5-4c4e-b37e-f402a795c7fd", 00:12:39.909 "is_configured": true, 00:12:39.909 "data_offset": 0, 00:12:39.909 "data_size": 65536 00:12:39.909 }, 00:12:39.909 { 00:12:39.909 "name": "BaseBdev2", 00:12:39.909 "uuid": "cfaccaa0-d708-4cba-91bb-5d115f8c93a5", 00:12:39.909 "is_configured": true, 00:12:39.909 "data_offset": 0, 00:12:39.909 "data_size": 65536 00:12:39.909 }, 00:12:39.909 { 00:12:39.909 "name": "BaseBdev3", 00:12:39.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.909 "is_configured": false, 00:12:39.909 "data_offset": 0, 00:12:39.909 "data_size": 0 00:12:39.909 } 00:12:39.909 ] 00:12:39.909 }' 00:12:39.909 07:58:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:39.909 07:58:45 -- common/autotest_common.sh@10 -- # set +x 00:12:40.476 07:58:46 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:40.735 [2024-07-13 07:58:46.360723] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:40.735 [2024-07-13 07:58:46.360781] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000027680 00:12:40.735 [2024-07-13 07:58:46.360789] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:40.735 [2024-07-13 07:58:46.360882] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:12:40.735 [2024-07-13 07:58:46.361134] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000027680 00:12:40.735 [2024-07-13 07:58:46.361144] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000027680 00:12:40.735 [2024-07-13 07:58:46.361281] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.735 BaseBdev3 00:12:40.735 07:58:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:12:40.735 07:58:46 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:12:40.735 07:58:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:40.735 07:58:46 -- common/autotest_common.sh@889 -- # local i 00:12:40.735 07:58:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:40.735 07:58:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:40.735 07:58:46 -- 
common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:40.995 07:58:46 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:40.995 [ 00:12:40.995 { 00:12:40.995 "name": "BaseBdev3", 00:12:40.995 "aliases": [ 00:12:40.995 "a2cbc5b8-3f1d-4e8d-89f2-0d91d0093508" 00:12:40.995 ], 00:12:40.995 "product_name": "Malloc disk", 00:12:40.995 "block_size": 512, 00:12:40.995 "num_blocks": 65536, 00:12:40.995 "uuid": "a2cbc5b8-3f1d-4e8d-89f2-0d91d0093508", 00:12:40.995 "assigned_rate_limits": { 00:12:40.995 "rw_ios_per_sec": 0, 00:12:40.995 "rw_mbytes_per_sec": 0, 00:12:40.995 "r_mbytes_per_sec": 0, 00:12:40.995 "w_mbytes_per_sec": 0 00:12:40.995 }, 00:12:40.995 "claimed": true, 00:12:40.995 "claim_type": "exclusive_write", 00:12:40.995 "zoned": false, 00:12:40.995 "supported_io_types": { 00:12:40.995 "read": true, 00:12:40.995 "write": true, 00:12:40.995 "unmap": true, 00:12:40.995 "write_zeroes": true, 00:12:40.995 "flush": true, 00:12:40.995 "reset": true, 00:12:40.995 "compare": false, 00:12:40.995 "compare_and_write": false, 00:12:40.995 "abort": true, 00:12:40.995 "nvme_admin": false, 00:12:40.995 "nvme_io": false 00:12:40.995 }, 00:12:40.995 "memory_domains": [ 00:12:40.995 { 00:12:40.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.995 "dma_device_type": 2 00:12:40.995 } 00:12:40.995 ], 00:12:40.995 "driver_specific": {} 00:12:40.995 } 00:12:40.995 ] 00:12:40.995 07:58:46 -- common/autotest_common.sh@895 -- # return 0 00:12:40.995 07:58:46 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:12:40.995 07:58:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:40.995 07:58:46 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:40.995 07:58:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:40.995 07:58:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:40.995 07:58:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:40.995 07:58:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:40.995 07:58:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:40.995 07:58:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:40.995 07:58:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:40.995 07:58:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:40.995 07:58:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:40.995 07:58:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.995 07:58:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:41.253 07:58:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:41.253 "name": "Existed_Raid", 00:12:41.253 "uuid": "9b26ee67-a82e-4aa0-93a2-f33d0bf308a3", 00:12:41.253 "strip_size_kb": 0, 00:12:41.253 "state": "online", 00:12:41.253 "raid_level": "raid1", 00:12:41.253 "superblock": false, 00:12:41.253 "num_base_bdevs": 3, 00:12:41.253 "num_base_bdevs_discovered": 3, 00:12:41.253 "num_base_bdevs_operational": 3, 00:12:41.253 "base_bdevs_list": [ 00:12:41.253 { 00:12:41.253 "name": "BaseBdev1", 00:12:41.253 "uuid": "9d33912e-b4e5-4c4e-b37e-f402a795c7fd", 00:12:41.253 "is_configured": true, 00:12:41.253 "data_offset": 0, 00:12:41.253 "data_size": 65536 00:12:41.253 }, 00:12:41.253 { 00:12:41.253 "name": "BaseBdev2", 00:12:41.253 
"uuid": "cfaccaa0-d708-4cba-91bb-5d115f8c93a5", 00:12:41.253 "is_configured": true, 00:12:41.253 "data_offset": 0, 00:12:41.253 "data_size": 65536 00:12:41.253 }, 00:12:41.253 { 00:12:41.253 "name": "BaseBdev3", 00:12:41.253 "uuid": "a2cbc5b8-3f1d-4e8d-89f2-0d91d0093508", 00:12:41.253 "is_configured": true, 00:12:41.253 "data_offset": 0, 00:12:41.253 "data_size": 65536 00:12:41.253 } 00:12:41.253 ] 00:12:41.253 }' 00:12:41.253 07:58:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:41.253 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:41.820 07:58:47 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:41.820 [2024-07-13 07:58:47.572952] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:41.820 07:58:47 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:12:41.820 07:58:47 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:12:41.820 07:58:47 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:12:41.820 07:58:47 -- bdev/bdev_raid.sh@196 -- # return 0 00:12:41.820 07:58:47 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:12:41.820 07:58:47 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:41.820 07:58:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:41.820 07:58:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:41.820 07:58:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:41.820 07:58:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:41.820 07:58:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:41.820 07:58:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:41.820 07:58:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:41.820 07:58:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:41.820 07:58:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:41.820 07:58:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.820 07:58:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:42.079 07:58:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:42.079 "name": "Existed_Raid", 00:12:42.079 "uuid": "9b26ee67-a82e-4aa0-93a2-f33d0bf308a3", 00:12:42.079 "strip_size_kb": 0, 00:12:42.079 "state": "online", 00:12:42.079 "raid_level": "raid1", 00:12:42.079 "superblock": false, 00:12:42.079 "num_base_bdevs": 3, 00:12:42.079 "num_base_bdevs_discovered": 2, 00:12:42.079 "num_base_bdevs_operational": 2, 00:12:42.079 "base_bdevs_list": [ 00:12:42.079 { 00:12:42.079 "name": null, 00:12:42.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.079 "is_configured": false, 00:12:42.079 "data_offset": 0, 00:12:42.079 "data_size": 65536 00:12:42.079 }, 00:12:42.079 { 00:12:42.079 "name": "BaseBdev2", 00:12:42.079 "uuid": "cfaccaa0-d708-4cba-91bb-5d115f8c93a5", 00:12:42.079 "is_configured": true, 00:12:42.079 "data_offset": 0, 00:12:42.079 "data_size": 65536 00:12:42.079 }, 00:12:42.079 { 00:12:42.079 "name": "BaseBdev3", 00:12:42.079 "uuid": "a2cbc5b8-3f1d-4e8d-89f2-0d91d0093508", 00:12:42.079 "is_configured": true, 00:12:42.079 "data_offset": 0, 00:12:42.079 "data_size": 65536 00:12:42.079 } 00:12:42.079 ] 00:12:42.079 }' 00:12:42.079 07:58:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:42.079 07:58:47 -- common/autotest_common.sh@10 -- # set +x 00:12:42.646 07:58:48 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 
00:12:42.646 07:58:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:42.646 07:58:48 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:12:42.646 07:58:48 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:42.646 07:58:48 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:12:42.646 07:58:48 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:42.646 07:58:48 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:42.905 [2024-07-13 07:58:48.539006] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:42.905 07:58:48 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:12:42.905 07:58:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:42.905 07:58:48 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:12:42.905 07:58:48 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:43.163 07:58:48 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:12:43.163 07:58:48 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:43.163 07:58:48 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:43.163 [2024-07-13 07:58:48.861274] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:43.163 [2024-07-13 07:58:48.861314] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:43.163 [2024-07-13 07:58:48.861368] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:43.163 [2024-07-13 07:58:48.875037] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:43.163 [2024-07-13 07:58:48.875078] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027680 name Existed_Raid, state offline 00:12:43.163 07:58:48 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:12:43.163 07:58:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:43.163 07:58:48 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:12:43.163 07:58:48 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:43.421 07:58:49 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:12:43.421 07:58:49 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:12:43.421 07:58:49 -- bdev/bdev_raid.sh@287 -- # killprocess 62143 00:12:43.421 07:58:49 -- common/autotest_common.sh@926 -- # '[' -z 62143 ']' 00:12:43.421 07:58:49 -- common/autotest_common.sh@930 -- # kill -0 62143 00:12:43.421 07:58:49 -- common/autotest_common.sh@931 -- # uname 00:12:43.421 07:58:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:43.421 07:58:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62143 00:12:43.421 killing process with pid 62143 00:12:43.421 07:58:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:43.421 07:58:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:43.421 07:58:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62143' 00:12:43.421 07:58:49 -- common/autotest_common.sh@945 -- # kill 62143 00:12:43.421 07:58:49 -- common/autotest_common.sh@950 -- # wait 62143 00:12:43.421 [2024-07-13 07:58:49.071213] 
bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:43.421 [2024-07-13 07:58:49.071279] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:43.680 ************************************ 00:12:43.680 END TEST raid_state_function_test 00:12:43.680 ************************************ 00:12:43.680 07:58:49 -- bdev/bdev_raid.sh@289 -- # return 0 00:12:43.680 00:12:43.680 real 0m8.959s 00:12:43.680 user 0m16.253s 00:12:43.680 sys 0m1.245s 00:12:43.680 07:58:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:43.680 07:58:49 -- common/autotest_common.sh@10 -- # set +x 00:12:43.680 07:58:49 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:12:43.680 07:58:49 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:12:43.680 07:58:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:43.680 07:58:49 -- common/autotest_common.sh@10 -- # set +x 00:12:43.680 ************************************ 00:12:43.680 START TEST raid_state_function_test_sb 00:12:43.680 ************************************ 00:12:43.680 07:58:49 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 true 00:12:43.680 07:58:49 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:12:43.680 07:58:49 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:12:43.680 07:58:49 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:12:43.680 07:58:49 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:12:43.680 07:58:49 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:12:43.680 07:58:49 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:12:43.680 07:58:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:43.680 07:58:49 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:12:43.680 07:58:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:43.680 07:58:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:43.680 07:58:49 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:12:43.680 07:58:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:43.680 07:58:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:43.680 07:58:49 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:12:43.680 07:58:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:43.680 07:58:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:43.680 07:58:49 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:12:43.680 07:58:49 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:12:43.680 07:58:49 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:12:43.680 07:58:49 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:12:43.680 07:58:49 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:12:43.680 07:58:49 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:12:43.680 Process raid pid: 62488 00:12:43.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
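[Annotation] The "Process raid pid" / "Waiting for process to start up" echoes just above come from the harness launching a fresh SPDK app for the next test. A minimal sketch of that startup sequence, assembled from the bdev_raid.sh@225-228 trace: the bdev_svc invocation and socket path are copied from the log, while raid_pid=$! and the polling probe are assumptions standing in for waitforlisten, whose internals this log does not show.

    # Launch the bdev_svc test app on a private RPC socket, as traced above.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!    # the trace records this as raid_pid=62488
    echo "Process raid pid: $raid_pid"
    # Assumed stand-in for waitforlisten: poll until the socket answers RPCs.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done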
00:12:43.680 07:58:49 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:12:43.680 07:58:49 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:12:43.680 07:58:49 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:12:43.680 07:58:49 -- bdev/bdev_raid.sh@226 -- # raid_pid=62488 00:12:43.680 07:58:49 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 62488' 00:12:43.680 07:58:49 -- bdev/bdev_raid.sh@228 -- # waitforlisten 62488 /var/tmp/spdk-raid.sock 00:12:43.680 07:58:49 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:43.680 07:58:49 -- common/autotest_common.sh@819 -- # '[' -z 62488 ']' 00:12:43.680 07:58:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:43.680 07:58:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:43.680 07:58:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:43.680 07:58:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:43.680 07:58:49 -- common/autotest_common.sh@10 -- # set +x 00:12:43.680 [2024-07-13 07:58:49.470716] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:12:43.681 [2024-07-13 07:58:49.470900] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:43.939 [2024-07-13 07:58:49.611335] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.939 [2024-07-13 07:58:49.662277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.939 [2024-07-13 07:58:49.707809] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:44.898 07:58:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:44.898 07:58:50 -- common/autotest_common.sh@852 -- # return 0 00:12:44.898 07:58:50 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:44.898 [2024-07-13 07:58:50.475027] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:44.898 [2024-07-13 07:58:50.475157] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:44.898 [2024-07-13 07:58:50.475177] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:44.898 [2024-07-13 07:58:50.475225] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:44.898 [2024-07-13 07:58:50.475237] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:44.898 [2024-07-13 07:58:50.475307] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:44.898 07:58:50 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:44.898 07:58:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:44.898 07:58:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:44.898 07:58:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:44.898 07:58:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:44.898 07:58:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:44.898 07:58:50 
-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:44.898 07:58:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:44.898 07:58:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:44.898 07:58:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:44.898 07:58:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:44.898 07:58:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.156 07:58:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:45.156 "name": "Existed_Raid", 00:12:45.156 "uuid": "fc5ccf45-c3c1-4f4a-a447-fb0740e57a29", 00:12:45.156 "strip_size_kb": 0, 00:12:45.156 "state": "configuring", 00:12:45.156 "raid_level": "raid1", 00:12:45.156 "superblock": true, 00:12:45.156 "num_base_bdevs": 3, 00:12:45.156 "num_base_bdevs_discovered": 0, 00:12:45.156 "num_base_bdevs_operational": 3, 00:12:45.156 "base_bdevs_list": [ 00:12:45.156 { 00:12:45.156 "name": "BaseBdev1", 00:12:45.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.156 "is_configured": false, 00:12:45.156 "data_offset": 0, 00:12:45.156 "data_size": 0 00:12:45.156 }, 00:12:45.156 { 00:12:45.156 "name": "BaseBdev2", 00:12:45.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.156 "is_configured": false, 00:12:45.156 "data_offset": 0, 00:12:45.156 "data_size": 0 00:12:45.156 }, 00:12:45.156 { 00:12:45.156 "name": "BaseBdev3", 00:12:45.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.156 "is_configured": false, 00:12:45.156 "data_offset": 0, 00:12:45.156 "data_size": 0 00:12:45.156 } 00:12:45.156 ] 00:12:45.156 }' 00:12:45.156 07:58:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:45.156 07:58:50 -- common/autotest_common.sh@10 -- # set +x 00:12:45.722 07:58:51 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:45.722 [2024-07-13 07:58:51.426851] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:45.722 [2024-07-13 07:58:51.426919] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000025880 name Existed_Raid, state configuring 00:12:45.722 07:58:51 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:45.980 [2024-07-13 07:58:51.586959] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:45.980 [2024-07-13 07:58:51.587042] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:45.980 [2024-07-13 07:58:51.587053] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:45.980 [2024-07-13 07:58:51.587074] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:45.980 [2024-07-13 07:58:51.587081] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:45.980 [2024-07-13 07:58:51.587108] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:45.980 07:58:51 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:45.980 [2024-07-13 07:58:51.760399] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:45.980 BaseBdev1 00:12:45.980 
07:58:51 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:12:45.980 07:58:51 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:12:45.980 07:58:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:45.980 07:58:51 -- common/autotest_common.sh@889 -- # local i 00:12:45.980 07:58:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:45.980 07:58:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:45.980 07:58:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:46.238 07:58:51 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:46.497 [ 00:12:46.497 { 00:12:46.497 "name": "BaseBdev1", 00:12:46.497 "aliases": [ 00:12:46.497 "96dbf3b8-78bc-400d-bfee-807785128f5e" 00:12:46.497 ], 00:12:46.497 "product_name": "Malloc disk", 00:12:46.497 "block_size": 512, 00:12:46.497 "num_blocks": 65536, 00:12:46.497 "uuid": "96dbf3b8-78bc-400d-bfee-807785128f5e", 00:12:46.497 "assigned_rate_limits": { 00:12:46.497 "rw_ios_per_sec": 0, 00:12:46.497 "rw_mbytes_per_sec": 0, 00:12:46.497 "r_mbytes_per_sec": 0, 00:12:46.497 "w_mbytes_per_sec": 0 00:12:46.497 }, 00:12:46.497 "claimed": true, 00:12:46.497 "claim_type": "exclusive_write", 00:12:46.497 "zoned": false, 00:12:46.497 "supported_io_types": { 00:12:46.497 "read": true, 00:12:46.497 "write": true, 00:12:46.497 "unmap": true, 00:12:46.497 "write_zeroes": true, 00:12:46.497 "flush": true, 00:12:46.497 "reset": true, 00:12:46.497 "compare": false, 00:12:46.497 "compare_and_write": false, 00:12:46.497 "abort": true, 00:12:46.497 "nvme_admin": false, 00:12:46.497 "nvme_io": false 00:12:46.497 }, 00:12:46.497 "memory_domains": [ 00:12:46.497 { 00:12:46.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.497 "dma_device_type": 2 00:12:46.497 } 00:12:46.497 ], 00:12:46.497 "driver_specific": {} 00:12:46.497 } 00:12:46.497 ] 00:12:46.497 07:58:52 -- common/autotest_common.sh@895 -- # return 0 00:12:46.497 07:58:52 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:46.497 07:58:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:46.497 07:58:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:46.497 07:58:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:46.497 07:58:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:46.497 07:58:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:46.497 07:58:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:46.497 07:58:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:46.497 07:58:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:46.497 07:58:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:46.497 07:58:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:46.497 07:58:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.497 07:58:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:46.497 "name": "Existed_Raid", 00:12:46.497 "uuid": "c24db05d-fab7-40c1-9ae2-497781016421", 00:12:46.497 "strip_size_kb": 0, 00:12:46.497 "state": "configuring", 00:12:46.497 "raid_level": "raid1", 00:12:46.497 "superblock": true, 00:12:46.497 "num_base_bdevs": 3, 00:12:46.497 "num_base_bdevs_discovered": 1, 00:12:46.497 
"num_base_bdevs_operational": 3, 00:12:46.497 "base_bdevs_list": [ 00:12:46.497 { 00:12:46.497 "name": "BaseBdev1", 00:12:46.497 "uuid": "96dbf3b8-78bc-400d-bfee-807785128f5e", 00:12:46.497 "is_configured": true, 00:12:46.497 "data_offset": 2048, 00:12:46.497 "data_size": 63488 00:12:46.497 }, 00:12:46.497 { 00:12:46.497 "name": "BaseBdev2", 00:12:46.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.497 "is_configured": false, 00:12:46.497 "data_offset": 0, 00:12:46.497 "data_size": 0 00:12:46.497 }, 00:12:46.497 { 00:12:46.497 "name": "BaseBdev3", 00:12:46.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.497 "is_configured": false, 00:12:46.497 "data_offset": 0, 00:12:46.497 "data_size": 0 00:12:46.497 } 00:12:46.497 ] 00:12:46.497 }' 00:12:46.497 07:58:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:46.497 07:58:52 -- common/autotest_common.sh@10 -- # set +x 00:12:47.065 07:58:52 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:47.323 [2024-07-13 07:58:52.884558] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:47.323 [2024-07-13 07:58:52.884632] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026480 name Existed_Raid, state configuring 00:12:47.323 07:58:52 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:12:47.323 07:58:52 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:47.581 07:58:53 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:47.581 BaseBdev1 00:12:47.581 07:58:53 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:12:47.581 07:58:53 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:12:47.581 07:58:53 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:47.581 07:58:53 -- common/autotest_common.sh@889 -- # local i 00:12:47.581 07:58:53 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:47.581 07:58:53 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:47.581 07:58:53 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:47.839 07:58:53 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:47.839 [ 00:12:47.839 { 00:12:47.839 "name": "BaseBdev1", 00:12:47.839 "aliases": [ 00:12:47.839 "02d6e3b5-9fa9-45d8-8d2c-cc26ff7cc4a2" 00:12:47.839 ], 00:12:47.839 "product_name": "Malloc disk", 00:12:47.839 "block_size": 512, 00:12:47.839 "num_blocks": 65536, 00:12:47.839 "uuid": "02d6e3b5-9fa9-45d8-8d2c-cc26ff7cc4a2", 00:12:47.839 "assigned_rate_limits": { 00:12:47.839 "rw_ios_per_sec": 0, 00:12:47.839 "rw_mbytes_per_sec": 0, 00:12:47.839 "r_mbytes_per_sec": 0, 00:12:47.839 "w_mbytes_per_sec": 0 00:12:47.839 }, 00:12:47.839 "claimed": false, 00:12:47.839 "zoned": false, 00:12:47.839 "supported_io_types": { 00:12:47.839 "read": true, 00:12:47.839 "write": true, 00:12:47.839 "unmap": true, 00:12:47.839 "write_zeroes": true, 00:12:47.839 "flush": true, 00:12:47.839 "reset": true, 00:12:47.839 "compare": false, 00:12:47.839 "compare_and_write": false, 00:12:47.839 "abort": true, 00:12:47.839 "nvme_admin": false, 00:12:47.839 "nvme_io": false 00:12:47.839 }, 00:12:47.839 "memory_domains": [ 00:12:47.839 { 00:12:47.839 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.839 "dma_device_type": 2 00:12:47.839 } 00:12:47.839 ], 00:12:47.839 "driver_specific": {} 00:12:47.839 } 00:12:47.839 ] 00:12:47.839 07:58:53 -- common/autotest_common.sh@895 -- # return 0 00:12:47.839 07:58:53 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:48.097 [2024-07-13 07:58:53.807509] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:48.097 [2024-07-13 07:58:53.809569] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:48.097 [2024-07-13 07:58:53.809631] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:48.097 [2024-07-13 07:58:53.809642] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:48.097 [2024-07-13 07:58:53.809666] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:48.097 07:58:53 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:12:48.097 07:58:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:48.097 07:58:53 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:48.097 07:58:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:48.097 07:58:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:48.097 07:58:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:48.097 07:58:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:48.097 07:58:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:48.097 07:58:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:48.097 07:58:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:48.097 07:58:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:48.097 07:58:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:48.097 07:58:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.097 07:58:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:48.356 07:58:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:48.356 "name": "Existed_Raid", 00:12:48.356 "uuid": "df4f2e8d-d18b-48cd-a583-33d18e936fc6", 00:12:48.356 "strip_size_kb": 0, 00:12:48.356 "state": "configuring", 00:12:48.356 "raid_level": "raid1", 00:12:48.356 "superblock": true, 00:12:48.356 "num_base_bdevs": 3, 00:12:48.356 "num_base_bdevs_discovered": 1, 00:12:48.356 "num_base_bdevs_operational": 3, 00:12:48.356 "base_bdevs_list": [ 00:12:48.356 { 00:12:48.356 "name": "BaseBdev1", 00:12:48.356 "uuid": "02d6e3b5-9fa9-45d8-8d2c-cc26ff7cc4a2", 00:12:48.356 "is_configured": true, 00:12:48.356 "data_offset": 2048, 00:12:48.356 "data_size": 63488 00:12:48.356 }, 00:12:48.356 { 00:12:48.356 "name": "BaseBdev2", 00:12:48.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.356 "is_configured": false, 00:12:48.356 "data_offset": 0, 00:12:48.356 "data_size": 0 00:12:48.356 }, 00:12:48.356 { 00:12:48.356 "name": "BaseBdev3", 00:12:48.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.356 "is_configured": false, 00:12:48.356 "data_offset": 0, 00:12:48.356 "data_size": 0 00:12:48.356 } 00:12:48.356 ] 00:12:48.356 }' 00:12:48.356 07:58:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:48.356 07:58:54 -- 
common/autotest_common.sh@10 -- # set +x 00:12:48.923 07:58:54 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:49.182 BaseBdev2 00:12:49.182 [2024-07-13 07:58:54.822681] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:49.182 07:58:54 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:12:49.182 07:58:54 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:12:49.182 07:58:54 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:49.182 07:58:54 -- common/autotest_common.sh@889 -- # local i 00:12:49.182 07:58:54 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:49.182 07:58:54 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:49.182 07:58:54 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:49.441 07:58:55 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:49.700 [ 00:12:49.700 { 00:12:49.700 "name": "BaseBdev2", 00:12:49.700 "aliases": [ 00:12:49.700 "929666a2-c30e-48e0-9d2a-c93c19ecf9a9" 00:12:49.700 ], 00:12:49.700 "product_name": "Malloc disk", 00:12:49.700 "block_size": 512, 00:12:49.700 "num_blocks": 65536, 00:12:49.700 "uuid": "929666a2-c30e-48e0-9d2a-c93c19ecf9a9", 00:12:49.700 "assigned_rate_limits": { 00:12:49.700 "rw_ios_per_sec": 0, 00:12:49.700 "rw_mbytes_per_sec": 0, 00:12:49.700 "r_mbytes_per_sec": 0, 00:12:49.700 "w_mbytes_per_sec": 0 00:12:49.700 }, 00:12:49.700 "claimed": true, 00:12:49.700 "claim_type": "exclusive_write", 00:12:49.700 "zoned": false, 00:12:49.700 "supported_io_types": { 00:12:49.700 "read": true, 00:12:49.700 "write": true, 00:12:49.700 "unmap": true, 00:12:49.700 "write_zeroes": true, 00:12:49.700 "flush": true, 00:12:49.700 "reset": true, 00:12:49.700 "compare": false, 00:12:49.700 "compare_and_write": false, 00:12:49.700 "abort": true, 00:12:49.700 "nvme_admin": false, 00:12:49.700 "nvme_io": false 00:12:49.700 }, 00:12:49.700 "memory_domains": [ 00:12:49.700 { 00:12:49.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.700 "dma_device_type": 2 00:12:49.700 } 00:12:49.700 ], 00:12:49.700 "driver_specific": {} 00:12:49.700 } 00:12:49.700 ] 00:12:49.700 07:58:55 -- common/autotest_common.sh@895 -- # return 0 00:12:49.700 07:58:55 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:12:49.700 07:58:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:49.700 07:58:55 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:49.700 07:58:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:49.700 07:58:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:49.700 07:58:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:49.700 07:58:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:49.700 07:58:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:49.700 07:58:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:49.700 07:58:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:49.700 07:58:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:49.700 07:58:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:49.700 07:58:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
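[Annotation] Each base bdev created above is followed by a waitforbdev call, and its body is traced at common/autotest_common.sh@887-895: a defaulted 2000 ms timeout, a bdev_wait_for_examine flush, and a blocking bdev_get_bdevs lookup. A minimal sketch of that helper; the retry loop implied by "local i" is elided, so treat the exact structure as an assumption.

    # Sketch of waitforbdev as suggested by the autotest_common.sh trace.
    waitforbdev() {
        local bdev_name=$1
        local bdev_timeout=$2
        local i
        [[ -z $bdev_timeout ]] && bdev_timeout=2000   # default seen in the trace
        # Let pending examine callbacks finish so the bdev table is settled.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_wait_for_examine
        # -t makes bdev_get_bdevs wait up to bdev_timeout ms for the bdev.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
        return 0
    }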
00:12:49.700 07:58:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.700 07:58:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:49.700 "name": "Existed_Raid", 00:12:49.700 "uuid": "df4f2e8d-d18b-48cd-a583-33d18e936fc6", 00:12:49.700 "strip_size_kb": 0, 00:12:49.700 "state": "configuring", 00:12:49.700 "raid_level": "raid1", 00:12:49.700 "superblock": true, 00:12:49.700 "num_base_bdevs": 3, 00:12:49.700 "num_base_bdevs_discovered": 2, 00:12:49.700 "num_base_bdevs_operational": 3, 00:12:49.700 "base_bdevs_list": [ 00:12:49.700 { 00:12:49.700 "name": "BaseBdev1", 00:12:49.700 "uuid": "02d6e3b5-9fa9-45d8-8d2c-cc26ff7cc4a2", 00:12:49.700 "is_configured": true, 00:12:49.700 "data_offset": 2048, 00:12:49.700 "data_size": 63488 00:12:49.700 }, 00:12:49.700 { 00:12:49.700 "name": "BaseBdev2", 00:12:49.700 "uuid": "929666a2-c30e-48e0-9d2a-c93c19ecf9a9", 00:12:49.700 "is_configured": true, 00:12:49.700 "data_offset": 2048, 00:12:49.700 "data_size": 63488 00:12:49.700 }, 00:12:49.700 { 00:12:49.700 "name": "BaseBdev3", 00:12:49.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.700 "is_configured": false, 00:12:49.700 "data_offset": 0, 00:12:49.700 "data_size": 0 00:12:49.700 } 00:12:49.700 ] 00:12:49.700 }' 00:12:49.700 07:58:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:49.700 07:58:55 -- common/autotest_common.sh@10 -- # set +x 00:12:50.636 07:58:56 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:50.636 [2024-07-13 07:58:56.258090] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:50.636 [2024-07-13 07:58:56.258250] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000027c80 00:12:50.636 [2024-07-13 07:58:56.258262] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:50.636 [2024-07-13 07:58:56.258358] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:12:50.636 BaseBdev3 00:12:50.636 [2024-07-13 07:58:56.258922] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000027c80 00:12:50.636 [2024-07-13 07:58:56.258940] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000027c80 00:12:50.636 [2024-07-13 07:58:56.259049] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.636 07:58:56 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:12:50.636 07:58:56 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:12:50.636 07:58:56 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:50.636 07:58:56 -- common/autotest_common.sh@889 -- # local i 00:12:50.636 07:58:56 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:50.636 07:58:56 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:50.636 07:58:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:50.894 07:58:56 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:51.153 [ 00:12:51.153 { 00:12:51.153 "name": "BaseBdev3", 00:12:51.153 "aliases": [ 00:12:51.153 "b3884d41-d285-4129-8dd7-3d48a14552f8" 00:12:51.153 ], 00:12:51.153 "product_name": "Malloc disk", 00:12:51.153 "block_size": 512, 00:12:51.153 "num_blocks": 65536, 00:12:51.153 
"uuid": "b3884d41-d285-4129-8dd7-3d48a14552f8", 00:12:51.153 "assigned_rate_limits": { 00:12:51.153 "rw_ios_per_sec": 0, 00:12:51.153 "rw_mbytes_per_sec": 0, 00:12:51.153 "r_mbytes_per_sec": 0, 00:12:51.153 "w_mbytes_per_sec": 0 00:12:51.153 }, 00:12:51.153 "claimed": true, 00:12:51.153 "claim_type": "exclusive_write", 00:12:51.153 "zoned": false, 00:12:51.153 "supported_io_types": { 00:12:51.153 "read": true, 00:12:51.153 "write": true, 00:12:51.153 "unmap": true, 00:12:51.153 "write_zeroes": true, 00:12:51.153 "flush": true, 00:12:51.153 "reset": true, 00:12:51.153 "compare": false, 00:12:51.153 "compare_and_write": false, 00:12:51.153 "abort": true, 00:12:51.153 "nvme_admin": false, 00:12:51.153 "nvme_io": false 00:12:51.153 }, 00:12:51.153 "memory_domains": [ 00:12:51.153 { 00:12:51.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.153 "dma_device_type": 2 00:12:51.153 } 00:12:51.153 ], 00:12:51.153 "driver_specific": {} 00:12:51.153 } 00:12:51.153 ] 00:12:51.153 07:58:56 -- common/autotest_common.sh@895 -- # return 0 00:12:51.153 07:58:56 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:12:51.153 07:58:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:51.153 07:58:56 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:51.153 07:58:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:51.153 07:58:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:51.153 07:58:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:51.153 07:58:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:51.153 07:58:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:51.153 07:58:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:51.153 07:58:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:51.153 07:58:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:51.153 07:58:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:51.153 07:58:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:51.153 07:58:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.153 07:58:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:51.153 "name": "Existed_Raid", 00:12:51.153 "uuid": "df4f2e8d-d18b-48cd-a583-33d18e936fc6", 00:12:51.153 "strip_size_kb": 0, 00:12:51.153 "state": "online", 00:12:51.153 "raid_level": "raid1", 00:12:51.153 "superblock": true, 00:12:51.153 "num_base_bdevs": 3, 00:12:51.153 "num_base_bdevs_discovered": 3, 00:12:51.153 "num_base_bdevs_operational": 3, 00:12:51.153 "base_bdevs_list": [ 00:12:51.153 { 00:12:51.153 "name": "BaseBdev1", 00:12:51.153 "uuid": "02d6e3b5-9fa9-45d8-8d2c-cc26ff7cc4a2", 00:12:51.153 "is_configured": true, 00:12:51.153 "data_offset": 2048, 00:12:51.153 "data_size": 63488 00:12:51.153 }, 00:12:51.153 { 00:12:51.153 "name": "BaseBdev2", 00:12:51.153 "uuid": "929666a2-c30e-48e0-9d2a-c93c19ecf9a9", 00:12:51.153 "is_configured": true, 00:12:51.153 "data_offset": 2048, 00:12:51.153 "data_size": 63488 00:12:51.153 }, 00:12:51.153 { 00:12:51.153 "name": "BaseBdev3", 00:12:51.153 "uuid": "b3884d41-d285-4129-8dd7-3d48a14552f8", 00:12:51.153 "is_configured": true, 00:12:51.153 "data_offset": 2048, 00:12:51.153 "data_size": 63488 00:12:51.153 } 00:12:51.153 ] 00:12:51.153 }' 00:12:51.153 07:58:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:51.153 07:58:56 -- common/autotest_common.sh@10 -- # set +x 00:12:51.720 07:58:57 
-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:51.979 [2024-07-13 07:58:57.594498] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:51.979 07:58:57 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:12:51.979 07:58:57 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:12:51.979 07:58:57 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:12:51.979 07:58:57 -- bdev/bdev_raid.sh@196 -- # return 0 00:12:51.979 07:58:57 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:12:51.979 07:58:57 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:51.979 07:58:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:51.979 07:58:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:51.979 07:58:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:51.979 07:58:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:51.979 07:58:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:51.979 07:58:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:51.979 07:58:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:51.979 07:58:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:51.979 07:58:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:51.979 07:58:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:51.979 07:58:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.240 07:58:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:52.240 "name": "Existed_Raid", 00:12:52.240 "uuid": "df4f2e8d-d18b-48cd-a583-33d18e936fc6", 00:12:52.240 "strip_size_kb": 0, 00:12:52.240 "state": "online", 00:12:52.240 "raid_level": "raid1", 00:12:52.240 "superblock": true, 00:12:52.240 "num_base_bdevs": 3, 00:12:52.240 "num_base_bdevs_discovered": 2, 00:12:52.240 "num_base_bdevs_operational": 2, 00:12:52.240 "base_bdevs_list": [ 00:12:52.240 { 00:12:52.240 "name": null, 00:12:52.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.240 "is_configured": false, 00:12:52.240 "data_offset": 2048, 00:12:52.240 "data_size": 63488 00:12:52.240 }, 00:12:52.240 { 00:12:52.240 "name": "BaseBdev2", 00:12:52.240 "uuid": "929666a2-c30e-48e0-9d2a-c93c19ecf9a9", 00:12:52.240 "is_configured": true, 00:12:52.240 "data_offset": 2048, 00:12:52.240 "data_size": 63488 00:12:52.240 }, 00:12:52.240 { 00:12:52.240 "name": "BaseBdev3", 00:12:52.240 "uuid": "b3884d41-d285-4129-8dd7-3d48a14552f8", 00:12:52.240 "is_configured": true, 00:12:52.240 "data_offset": 2048, 00:12:52.240 "data_size": 63488 00:12:52.240 } 00:12:52.240 ] 00:12:52.240 }' 00:12:52.240 07:58:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:52.240 07:58:57 -- common/autotest_common.sh@10 -- # set +x 00:12:52.807 07:58:58 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:12:52.807 07:58:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:52.807 07:58:58 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:52.807 07:58:58 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:12:53.066 07:58:58 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:12:53.066 07:58:58 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:53.066 07:58:58 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:53.066 [2024-07-13 07:58:58.875791] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:53.324 07:58:58 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:12:53.324 07:58:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:53.324 07:58:58 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:12:53.324 07:58:58 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:53.324 07:58:59 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:12:53.324 07:58:59 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:53.324 07:58:59 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:53.582 [2024-07-13 07:58:59.274599] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:53.582 [2024-07-13 07:58:59.274644] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:53.582 [2024-07-13 07:58:59.274707] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:53.582 [2024-07-13 07:58:59.287140] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:53.583 [2024-07-13 07:58:59.287197] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027c80 name Existed_Raid, state offline 00:12:53.583 07:58:59 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:12:53.583 07:58:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:53.583 07:58:59 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:53.583 07:58:59 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:12:53.846 07:58:59 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:12:53.846 07:58:59 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:12:53.846 07:58:59 -- bdev/bdev_raid.sh@287 -- # killprocess 62488 00:12:53.846 07:58:59 -- common/autotest_common.sh@926 -- # '[' -z 62488 ']' 00:12:53.846 07:58:59 -- common/autotest_common.sh@930 -- # kill -0 62488 00:12:53.846 07:58:59 -- common/autotest_common.sh@931 -- # uname 00:12:53.846 07:58:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:53.846 07:58:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62488 00:12:53.846 killing process with pid 62488 00:12:53.847 07:58:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:53.847 07:58:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:53.847 07:58:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62488' 00:12:53.847 07:58:59 -- common/autotest_common.sh@945 -- # kill 62488 00:12:53.847 07:58:59 -- common/autotest_common.sh@950 -- # wait 62488 00:12:53.847 [2024-07-13 07:58:59.485114] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:53.847 [2024-07-13 07:58:59.485223] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:54.130 07:58:59 -- bdev/bdev_raid.sh@289 -- # return 0 00:12:54.130 00:12:54.130 real 0m10.486s 00:12:54.130 user 0m18.916s 00:12:54.130 sys 0m1.476s 00:12:54.130 ************************************ 00:12:54.130 END TEST raid_state_function_test_sb 00:12:54.130 ************************************ 00:12:54.130 07:58:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:54.130 07:58:59 -- 
common/autotest_common.sh@10 -- # set +x 00:12:54.130 07:58:59 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:12:54.130 07:58:59 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:12:54.130 07:58:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:54.130 07:58:59 -- common/autotest_common.sh@10 -- # set +x 00:12:54.130 ************************************ 00:12:54.130 START TEST raid_superblock_test 00:12:54.130 ************************************ 00:12:54.130 07:58:59 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 3 00:12:54.130 07:58:59 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:12:54.130 07:58:59 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:12:54.130 07:58:59 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:12:54.130 07:58:59 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:12:54.130 07:58:59 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:12:54.130 07:58:59 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:12:54.130 07:58:59 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:12:54.130 07:58:59 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:12:54.130 07:58:59 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:12:54.130 07:58:59 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:12:54.130 07:58:59 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:12:54.130 07:58:59 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:12:54.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:54.130 07:58:59 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:12:54.130 07:58:59 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:12:54.130 07:58:59 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:12:54.130 07:58:59 -- bdev/bdev_raid.sh@357 -- # raid_pid=62857 00:12:54.130 07:58:59 -- bdev/bdev_raid.sh@358 -- # waitforlisten 62857 /var/tmp/spdk-raid.sock 00:12:54.130 07:58:59 -- common/autotest_common.sh@819 -- # '[' -z 62857 ']' 00:12:54.130 07:58:59 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:12:54.130 07:58:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:54.130 07:58:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:54.130 07:58:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:54.130 07:58:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:54.130 07:58:59 -- common/autotest_common.sh@10 -- # set +x 00:12:54.388 [2024-07-13 07:58:59.998915] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:12:54.388 [2024-07-13 07:58:59.999163] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62857 ] 00:12:54.388 [2024-07-13 07:59:00.152129] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.647 [2024-07-13 07:59:00.208918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.647 [2024-07-13 07:59:00.259364] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:55.213 07:59:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:55.213 07:59:00 -- common/autotest_common.sh@852 -- # return 0 00:12:55.213 07:59:00 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:12:55.213 07:59:00 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:12:55.213 07:59:00 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:12:55.213 07:59:00 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:12:55.213 07:59:00 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:55.213 07:59:00 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:55.213 07:59:00 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:12:55.213 07:59:00 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:55.213 07:59:00 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:12:55.213 malloc1 00:12:55.213 07:59:00 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:55.471 [2024-07-13 07:59:01.087468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:55.471 [2024-07-13 07:59:01.087550] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.471 [2024-07-13 07:59:01.087612] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000026180 00:12:55.471 [2024-07-13 07:59:01.087653] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.471 [2024-07-13 07:59:01.089363] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.471 [2024-07-13 07:59:01.089404] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:55.471 pt1 00:12:55.471 07:59:01 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:12:55.472 07:59:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:12:55.472 07:59:01 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:12:55.472 07:59:01 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:12:55.472 07:59:01 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:55.472 07:59:01 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:55.472 07:59:01 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:12:55.472 07:59:01 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:55.472 07:59:01 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:12:55.472 malloc2 00:12:55.472 07:59:01 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:12:55.731 [2024-07-13 07:59:01.380309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:55.731 [2024-07-13 07:59:01.380391] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.731 [2024-07-13 07:59:01.380437] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027f80 00:12:55.731 [2024-07-13 07:59:01.380662] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.731 pt2 00:12:55.731 [2024-07-13 07:59:01.382171] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.731 [2024-07-13 07:59:01.382215] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:55.731 07:59:01 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:12:55.731 07:59:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:12:55.731 07:59:01 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:12:55.731 07:59:01 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:12:55.731 07:59:01 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:55.731 07:59:01 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:55.731 07:59:01 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:12:55.731 07:59:01 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:55.731 07:59:01 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:12:55.731 malloc3 00:12:55.990 07:59:01 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:55.990 [2024-07-13 07:59:01.673314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:55.990 [2024-07-13 07:59:01.673395] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.990 [2024-07-13 07:59:01.673440] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000029d80 00:12:55.990 [2024-07-13 07:59:01.673705] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.990 pt3 00:12:55.990 [2024-07-13 07:59:01.675169] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.990 [2024-07-13 07:59:01.675222] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:55.990 07:59:01 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:12:55.990 07:59:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:12:55.990 07:59:01 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:12:56.249 [2024-07-13 07:59:01.821396] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:56.249 [2024-07-13 07:59:01.825606] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:56.249 [2024-07-13 07:59:01.825751] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:56.249 [2024-07-13 07:59:01.826148] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002b280 00:12:56.249 [2024-07-13 07:59:01.826194] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:56.249 [2024-07-13 07:59:01.826413] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:12:56.249 [2024-07-13 07:59:01.826924] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002b280 00:12:56.249 [2024-07-13 07:59:01.826973] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002b280 00:12:56.249 [2024-07-13 07:59:01.827154] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.249 07:59:01 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:56.249 07:59:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:12:56.249 07:59:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:56.249 07:59:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:56.249 07:59:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:56.249 07:59:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:56.249 07:59:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:56.249 07:59:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:56.249 07:59:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:56.249 07:59:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:56.249 07:59:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:56.249 07:59:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.249 07:59:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:56.249 "name": "raid_bdev1", 00:12:56.249 "uuid": "8988b490-e6a4-43dc-9a7c-afb534f6d73e", 00:12:56.249 "strip_size_kb": 0, 00:12:56.249 "state": "online", 00:12:56.249 "raid_level": "raid1", 00:12:56.249 "superblock": true, 00:12:56.249 "num_base_bdevs": 3, 00:12:56.249 "num_base_bdevs_discovered": 3, 00:12:56.249 "num_base_bdevs_operational": 3, 00:12:56.249 "base_bdevs_list": [ 00:12:56.249 { 00:12:56.249 "name": "pt1", 00:12:56.249 "uuid": "49e8af80-54e1-5c1f-a68d-b74627307e57", 00:12:56.249 "is_configured": true, 00:12:56.249 "data_offset": 2048, 00:12:56.249 "data_size": 63488 00:12:56.249 }, 00:12:56.249 { 00:12:56.249 "name": "pt2", 00:12:56.249 "uuid": "ce8e99e2-3abb-567f-918b-163da7e64b5d", 00:12:56.249 "is_configured": true, 00:12:56.249 "data_offset": 2048, 00:12:56.249 "data_size": 63488 00:12:56.249 }, 00:12:56.249 { 00:12:56.249 "name": "pt3", 00:12:56.249 "uuid": "055c8681-9694-5e57-8b29-5cd469c6efd9", 00:12:56.249 "is_configured": true, 00:12:56.249 "data_offset": 2048, 00:12:56.249 "data_size": 63488 00:12:56.249 } 00:12:56.249 ] 00:12:56.249 }' 00:12:56.249 07:59:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:56.249 07:59:02 -- common/autotest_common.sh@10 -- # set +x 00:12:56.817 07:59:02 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:12:56.817 07:59:02 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:57.075 [2024-07-13 07:59:02.778209] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.075 07:59:02 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=8988b490-e6a4-43dc-9a7c-afb534f6d73e 00:12:57.075 07:59:02 -- bdev/bdev_raid.sh@380 -- # '[' -z 8988b490-e6a4-43dc-9a7c-afb534f6d73e ']' 00:12:57.075 07:59:02 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:57.332 [2024-07-13 07:59:02.938110] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:57.332 [2024-07-13 07:59:02.938141] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:57.332 [2024-07-13 07:59:02.938199] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:57.332 [2024-07-13 07:59:02.938242] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:57.332 [2024-07-13 07:59:02.938251] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002b280 name raid_bdev1, state offline 00:12:57.332 07:59:02 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:12:57.332 07:59:02 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:57.591 07:59:03 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:12:57.591 07:59:03 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:12:57.591 07:59:03 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:12:57.591 07:59:03 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:57.849 07:59:03 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:12:57.849 07:59:03 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:57.849 07:59:03 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:12:57.849 07:59:03 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:58.110 07:59:03 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:58.110 07:59:03 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:12:58.368 07:59:03 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:12:58.368 07:59:03 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:58.368 07:59:03 -- common/autotest_common.sh@640 -- # local es=0 00:12:58.368 07:59:03 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:58.368 07:59:03 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:58.368 07:59:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:58.368 07:59:03 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:58.368 07:59:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:58.368 07:59:03 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:58.368 07:59:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:58.368 07:59:03 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:58.368 07:59:03 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:58.368 07:59:03 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:58.368 [2024-07-13 07:59:04.099135] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:58.368 [2024-07-13 07:59:04.100654] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:58.368 [2024-07-13 07:59:04.100687] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:58.368 [2024-07-13 07:59:04.100714] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:12:58.368 [2024-07-13 07:59:04.100770] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:12:58.368 [2024-07-13 07:59:04.100792] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:12:58.368 [2024-07-13 07:59:04.100829] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:58.368 [2024-07-13 07:59:04.100839] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002b880 name raid_bdev1, state configuring 00:12:58.368 request: 00:12:58.368 { 00:12:58.368 "name": "raid_bdev1", 00:12:58.368 "raid_level": "raid1", 00:12:58.368 "base_bdevs": [ 00:12:58.368 "malloc1", 00:12:58.368 "malloc2", 00:12:58.368 "malloc3" 00:12:58.368 ], 00:12:58.368 "superblock": false, 00:12:58.368 "method": "bdev_raid_create", 00:12:58.368 "req_id": 1 00:12:58.368 } 00:12:58.368 Got JSON-RPC error response 00:12:58.368 response: 00:12:58.368 { 00:12:58.368 "code": -17, 00:12:58.368 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:58.368 } 00:12:58.368 07:59:04 -- common/autotest_common.sh@643 -- # es=1 00:12:58.368 07:59:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:58.368 07:59:04 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:58.368 07:59:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:58.368 07:59:04 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:12:58.369 07:59:04 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:58.627 07:59:04 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:12:58.627 07:59:04 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:12:58.627 07:59:04 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:58.627 [2024-07-13 07:59:04.395128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:58.627 [2024-07-13 07:59:04.395189] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.627 [2024-07-13 07:59:04.395243] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002ca80 00:12:58.627 [2024-07-13 07:59:04.395269] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.627 [2024-07-13 07:59:04.396833] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.627 [2024-07-13 07:59:04.396875] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:58.627 [2024-07-13 07:59:04.396941] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:12:58.627 [2024-07-13 07:59:04.396983] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:58.627 pt1 00:12:58.627 07:59:04 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:58.627 
07:59:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:12:58.627 07:59:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:58.627 07:59:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:58.627 07:59:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:58.627 07:59:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:58.627 07:59:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:58.627 07:59:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:58.627 07:59:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:58.627 07:59:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:58.627 07:59:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:58.627 07:59:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.887 07:59:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:58.887 "name": "raid_bdev1", 00:12:58.887 "uuid": "8988b490-e6a4-43dc-9a7c-afb534f6d73e", 00:12:58.887 "strip_size_kb": 0, 00:12:58.887 "state": "configuring", 00:12:58.887 "raid_level": "raid1", 00:12:58.887 "superblock": true, 00:12:58.887 "num_base_bdevs": 3, 00:12:58.887 "num_base_bdevs_discovered": 1, 00:12:58.887 "num_base_bdevs_operational": 3, 00:12:58.887 "base_bdevs_list": [ 00:12:58.887 { 00:12:58.887 "name": "pt1", 00:12:58.887 "uuid": "49e8af80-54e1-5c1f-a68d-b74627307e57", 00:12:58.887 "is_configured": true, 00:12:58.887 "data_offset": 2048, 00:12:58.887 "data_size": 63488 00:12:58.887 }, 00:12:58.887 { 00:12:58.887 "name": null, 00:12:58.887 "uuid": "ce8e99e2-3abb-567f-918b-163da7e64b5d", 00:12:58.887 "is_configured": false, 00:12:58.887 "data_offset": 2048, 00:12:58.887 "data_size": 63488 00:12:58.887 }, 00:12:58.887 { 00:12:58.887 "name": null, 00:12:58.887 "uuid": "055c8681-9694-5e57-8b29-5cd469c6efd9", 00:12:58.887 "is_configured": false, 00:12:58.887 "data_offset": 2048, 00:12:58.887 "data_size": 63488 00:12:58.887 } 00:12:58.887 ] 00:12:58.887 }' 00:12:58.887 07:59:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:58.887 07:59:04 -- common/autotest_common.sh@10 -- # set +x 00:12:59.454 07:59:05 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:12:59.454 07:59:05 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:59.713 [2024-07-13 07:59:05.359290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:59.713 [2024-07-13 07:59:05.359372] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.713 [2024-07-13 07:59:05.359416] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002e580 00:12:59.713 [2024-07-13 07:59:05.359435] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.713 [2024-07-13 07:59:05.359838] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.713 [2024-07-13 07:59:05.359866] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:59.713 [2024-07-13 07:59:05.359929] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:12:59.713 [2024-07-13 07:59:05.359948] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:59.713 pt2 00:12:59.713 07:59:05 -- bdev/bdev_raid.sh@417 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:59.713 [2024-07-13 07:59:05.511353] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:59.713 07:59:05 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:59.713 07:59:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:12:59.713 07:59:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:59.713 07:59:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:59.713 07:59:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:59.713 07:59:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:59.713 07:59:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:59.713 07:59:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:59.713 07:59:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:59.713 07:59:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:59.972 07:59:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.972 07:59:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:59.972 07:59:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:59.972 "name": "raid_bdev1", 00:12:59.972 "uuid": "8988b490-e6a4-43dc-9a7c-afb534f6d73e", 00:12:59.972 "strip_size_kb": 0, 00:12:59.972 "state": "configuring", 00:12:59.972 "raid_level": "raid1", 00:12:59.972 "superblock": true, 00:12:59.972 "num_base_bdevs": 3, 00:12:59.972 "num_base_bdevs_discovered": 1, 00:12:59.972 "num_base_bdevs_operational": 3, 00:12:59.972 "base_bdevs_list": [ 00:12:59.972 { 00:12:59.972 "name": "pt1", 00:12:59.972 "uuid": "49e8af80-54e1-5c1f-a68d-b74627307e57", 00:12:59.972 "is_configured": true, 00:12:59.972 "data_offset": 2048, 00:12:59.972 "data_size": 63488 00:12:59.972 }, 00:12:59.972 { 00:12:59.972 "name": null, 00:12:59.972 "uuid": "ce8e99e2-3abb-567f-918b-163da7e64b5d", 00:12:59.972 "is_configured": false, 00:12:59.972 "data_offset": 2048, 00:12:59.972 "data_size": 63488 00:12:59.972 }, 00:12:59.972 { 00:12:59.972 "name": null, 00:12:59.972 "uuid": "055c8681-9694-5e57-8b29-5cd469c6efd9", 00:12:59.972 "is_configured": false, 00:12:59.972 "data_offset": 2048, 00:12:59.972 "data_size": 63488 00:12:59.972 } 00:12:59.972 ] 00:12:59.972 }' 00:12:59.972 07:59:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:59.972 07:59:05 -- common/autotest_common.sh@10 -- # set +x 00:13:00.539 07:59:06 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:13:00.539 07:59:06 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:13:00.539 07:59:06 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:00.798 [2024-07-13 07:59:06.423386] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:00.798 [2024-07-13 07:59:06.423454] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.798 [2024-07-13 07:59:06.423640] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002fa80 00:13:00.798 [2024-07-13 07:59:06.423667] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.798 [2024-07-13 07:59:06.423928] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.798 [2024-07-13 07:59:06.423953] vbdev_passthru.c: 
705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:00.798 [2024-07-13 07:59:06.424013] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:13:00.798 [2024-07-13 07:59:06.424030] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:00.798 pt2 00:13:00.798 07:59:06 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:13:00.798 07:59:06 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:13:00.798 07:59:06 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:00.798 [2024-07-13 07:59:06.579428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:00.798 [2024-07-13 07:59:06.579504] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.798 [2024-07-13 07:59:06.579535] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000031280 00:13:00.798 [2024-07-13 07:59:06.579559] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.798 [2024-07-13 07:59:06.579775] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.798 [2024-07-13 07:59:06.579804] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:00.798 [2024-07-13 07:59:06.579859] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:13:00.798 [2024-07-13 07:59:06.579877] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:00.798 [2024-07-13 07:59:06.579929] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002df80 00:13:00.798 [2024-07-13 07:59:06.579937] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:00.798 [2024-07-13 07:59:06.579990] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:13:00.798 [2024-07-13 07:59:06.580124] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002df80 00:13:00.798 [2024-07-13 07:59:06.580133] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002df80 00:13:00.798 [2024-07-13 07:59:06.580180] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:00.798 pt3 00:13:00.798 07:59:06 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:13:00.798 07:59:06 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:13:00.798 07:59:06 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:00.798 07:59:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:00.798 07:59:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:00.798 07:59:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:00.798 07:59:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:00.798 07:59:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:00.798 07:59:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:00.798 07:59:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:00.798 07:59:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:00.798 07:59:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:00.798 07:59:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:00.798 07:59:06 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.056 07:59:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:01.056 "name": "raid_bdev1", 00:13:01.056 "uuid": "8988b490-e6a4-43dc-9a7c-afb534f6d73e", 00:13:01.056 "strip_size_kb": 0, 00:13:01.056 "state": "online", 00:13:01.056 "raid_level": "raid1", 00:13:01.056 "superblock": true, 00:13:01.056 "num_base_bdevs": 3, 00:13:01.056 "num_base_bdevs_discovered": 3, 00:13:01.056 "num_base_bdevs_operational": 3, 00:13:01.056 "base_bdevs_list": [ 00:13:01.056 { 00:13:01.056 "name": "pt1", 00:13:01.056 "uuid": "49e8af80-54e1-5c1f-a68d-b74627307e57", 00:13:01.056 "is_configured": true, 00:13:01.056 "data_offset": 2048, 00:13:01.056 "data_size": 63488 00:13:01.056 }, 00:13:01.056 { 00:13:01.056 "name": "pt2", 00:13:01.056 "uuid": "ce8e99e2-3abb-567f-918b-163da7e64b5d", 00:13:01.056 "is_configured": true, 00:13:01.056 "data_offset": 2048, 00:13:01.056 "data_size": 63488 00:13:01.056 }, 00:13:01.056 { 00:13:01.056 "name": "pt3", 00:13:01.056 "uuid": "055c8681-9694-5e57-8b29-5cd469c6efd9", 00:13:01.056 "is_configured": true, 00:13:01.056 "data_offset": 2048, 00:13:01.056 "data_size": 63488 00:13:01.056 } 00:13:01.056 ] 00:13:01.056 }' 00:13:01.056 07:59:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:01.056 07:59:06 -- common/autotest_common.sh@10 -- # set +x 00:13:01.623 07:59:07 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:01.623 07:59:07 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:13:01.623 [2024-07-13 07:59:07.367606] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:01.623 07:59:07 -- bdev/bdev_raid.sh@430 -- # '[' 8988b490-e6a4-43dc-9a7c-afb534f6d73e '!=' 8988b490-e6a4-43dc-9a7c-afb534f6d73e ']' 00:13:01.623 07:59:07 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:13:01.623 07:59:07 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:01.623 07:59:07 -- bdev/bdev_raid.sh@196 -- # return 0 00:13:01.623 07:59:07 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:01.881 [2024-07-13 07:59:07.591551] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:01.881 07:59:07 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:01.881 07:59:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:01.881 07:59:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:01.881 07:59:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:01.881 07:59:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:01.881 07:59:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:01.881 07:59:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:01.881 07:59:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:01.881 07:59:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:01.881 07:59:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:01.881 07:59:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:01.881 07:59:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.140 07:59:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:02.140 "name": "raid_bdev1", 00:13:02.140 "uuid": "8988b490-e6a4-43dc-9a7c-afb534f6d73e", 00:13:02.140 "strip_size_kb": 0, 00:13:02.140 "state": "online", 
00:13:02.140 "raid_level": "raid1", 00:13:02.140 "superblock": true, 00:13:02.140 "num_base_bdevs": 3, 00:13:02.140 "num_base_bdevs_discovered": 2, 00:13:02.140 "num_base_bdevs_operational": 2, 00:13:02.140 "base_bdevs_list": [ 00:13:02.140 { 00:13:02.140 "name": null, 00:13:02.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.140 "is_configured": false, 00:13:02.140 "data_offset": 2048, 00:13:02.140 "data_size": 63488 00:13:02.140 }, 00:13:02.140 { 00:13:02.140 "name": "pt2", 00:13:02.140 "uuid": "ce8e99e2-3abb-567f-918b-163da7e64b5d", 00:13:02.140 "is_configured": true, 00:13:02.140 "data_offset": 2048, 00:13:02.140 "data_size": 63488 00:13:02.140 }, 00:13:02.140 { 00:13:02.140 "name": "pt3", 00:13:02.140 "uuid": "055c8681-9694-5e57-8b29-5cd469c6efd9", 00:13:02.140 "is_configured": true, 00:13:02.140 "data_offset": 2048, 00:13:02.140 "data_size": 63488 00:13:02.140 } 00:13:02.140 ] 00:13:02.140 }' 00:13:02.140 07:59:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:02.140 07:59:07 -- common/autotest_common.sh@10 -- # set +x 00:13:02.707 07:59:08 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:02.967 [2024-07-13 07:59:08.539647] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:02.967 [2024-07-13 07:59:08.539682] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:02.967 [2024-07-13 07:59:08.539743] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:02.967 [2024-07-13 07:59:08.539787] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:02.967 [2024-07-13 07:59:08.539796] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002df80 name raid_bdev1, state offline 00:13:02.967 07:59:08 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:02.967 07:59:08 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:13:02.967 07:59:08 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:13:02.967 07:59:08 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:13:02.967 07:59:08 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:13:02.967 07:59:08 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:13:02.967 07:59:08 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:03.268 07:59:08 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:13:03.268 07:59:08 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:13:03.268 07:59:08 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:13:03.545 07:59:09 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:13:03.545 07:59:09 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:13:03.545 07:59:09 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:13:03.545 07:59:09 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:13:03.545 07:59:09 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:03.545 [2024-07-13 07:59:09.355806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:03.545 [2024-07-13 07:59:09.355904] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.545 [2024-07-13 
07:59:09.355962] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000032780 00:13:03.545 [2024-07-13 07:59:09.355991] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.806 [2024-07-13 07:59:09.358440] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.806 [2024-07-13 07:59:09.358504] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:03.806 [2024-07-13 07:59:09.358587] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:13:03.806 [2024-07-13 07:59:09.358637] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:03.806 pt2 00:13:03.806 07:59:09 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:13:03.806 07:59:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:03.806 07:59:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:03.806 07:59:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:03.806 07:59:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:03.806 07:59:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:03.806 07:59:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:03.806 07:59:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:03.806 07:59:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:03.806 07:59:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:03.806 07:59:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.806 07:59:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:03.806 07:59:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:03.806 "name": "raid_bdev1", 00:13:03.806 "uuid": "8988b490-e6a4-43dc-9a7c-afb534f6d73e", 00:13:03.806 "strip_size_kb": 0, 00:13:03.806 "state": "configuring", 00:13:03.806 "raid_level": "raid1", 00:13:03.806 "superblock": true, 00:13:03.806 "num_base_bdevs": 3, 00:13:03.806 "num_base_bdevs_discovered": 1, 00:13:03.806 "num_base_bdevs_operational": 2, 00:13:03.806 "base_bdevs_list": [ 00:13:03.806 { 00:13:03.806 "name": null, 00:13:03.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.806 "is_configured": false, 00:13:03.806 "data_offset": 2048, 00:13:03.806 "data_size": 63488 00:13:03.806 }, 00:13:03.806 { 00:13:03.806 "name": "pt2", 00:13:03.806 "uuid": "ce8e99e2-3abb-567f-918b-163da7e64b5d", 00:13:03.806 "is_configured": true, 00:13:03.806 "data_offset": 2048, 00:13:03.806 "data_size": 63488 00:13:03.806 }, 00:13:03.806 { 00:13:03.806 "name": null, 00:13:03.806 "uuid": "055c8681-9694-5e57-8b29-5cd469c6efd9", 00:13:03.806 "is_configured": false, 00:13:03.806 "data_offset": 2048, 00:13:03.806 "data_size": 63488 00:13:03.806 } 00:13:03.806 ] 00:13:03.806 }' 00:13:03.806 07:59:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:03.806 07:59:09 -- common/autotest_common.sh@10 -- # set +x 00:13:04.373 07:59:10 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:13:04.373 07:59:10 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:13:04.373 07:59:10 -- bdev/bdev_raid.sh@462 -- # i=2 00:13:04.373 07:59:10 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:04.373 [2024-07-13 07:59:10.151912] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:04.373 [2024-07-13 07:59:10.151991] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.373 [2024-07-13 07:59:10.152033] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000034280 00:13:04.373 [2024-07-13 07:59:10.152053] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.373 [2024-07-13 07:59:10.152324] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.373 [2024-07-13 07:59:10.152348] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:04.373 [2024-07-13 07:59:10.152413] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:13:04.373 [2024-07-13 07:59:10.152432] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:04.373 [2024-07-13 07:59:10.152695] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000033c80 00:13:04.373 [2024-07-13 07:59:10.152714] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:04.373 [2024-07-13 07:59:10.152765] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:13:04.373 [2024-07-13 07:59:10.152934] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000033c80 00:13:04.373 [2024-07-13 07:59:10.152944] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000033c80 00:13:04.373 [2024-07-13 07:59:10.152994] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.373 pt3 00:13:04.373 07:59:10 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:04.373 07:59:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:04.373 07:59:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:04.373 07:59:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:04.373 07:59:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:04.373 07:59:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:04.373 07:59:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:04.373 07:59:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:04.373 07:59:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:04.373 07:59:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:04.373 07:59:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.373 07:59:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:04.633 07:59:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:04.633 "name": "raid_bdev1", 00:13:04.633 "uuid": "8988b490-e6a4-43dc-9a7c-afb534f6d73e", 00:13:04.633 "strip_size_kb": 0, 00:13:04.633 "state": "online", 00:13:04.633 "raid_level": "raid1", 00:13:04.633 "superblock": true, 00:13:04.633 "num_base_bdevs": 3, 00:13:04.633 "num_base_bdevs_discovered": 2, 00:13:04.633 "num_base_bdevs_operational": 2, 00:13:04.633 "base_bdevs_list": [ 00:13:04.633 { 00:13:04.633 "name": null, 00:13:04.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.633 "is_configured": false, 00:13:04.633 "data_offset": 2048, 00:13:04.633 "data_size": 63488 00:13:04.633 }, 00:13:04.633 { 00:13:04.633 "name": "pt2", 00:13:04.633 "uuid": "ce8e99e2-3abb-567f-918b-163da7e64b5d", 00:13:04.633 
"is_configured": true, 00:13:04.633 "data_offset": 2048, 00:13:04.633 "data_size": 63488 00:13:04.633 }, 00:13:04.633 { 00:13:04.633 "name": "pt3", 00:13:04.633 "uuid": "055c8681-9694-5e57-8b29-5cd469c6efd9", 00:13:04.633 "is_configured": true, 00:13:04.633 "data_offset": 2048, 00:13:04.633 "data_size": 63488 00:13:04.633 } 00:13:04.633 ] 00:13:04.633 }' 00:13:04.633 07:59:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:04.633 07:59:10 -- common/autotest_common.sh@10 -- # set +x 00:13:05.201 07:59:10 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:13:05.201 07:59:10 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:05.201 [2024-07-13 07:59:10.979982] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:05.201 [2024-07-13 07:59:10.980014] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:05.202 [2024-07-13 07:59:10.980066] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:05.202 [2024-07-13 07:59:10.980105] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:05.202 [2024-07-13 07:59:10.980114] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000033c80 name raid_bdev1, state offline 00:13:05.202 07:59:10 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:05.202 07:59:10 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:13:05.461 07:59:11 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:13:05.461 07:59:11 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:13:05.461 07:59:11 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:05.721 [2024-07-13 07:59:11.360035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:05.721 [2024-07-13 07:59:11.360109] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.721 [2024-07-13 07:59:11.360149] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000035780 00:13:05.721 [2024-07-13 07:59:11.360168] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.721 [2024-07-13 07:59:11.362292] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.721 [2024-07-13 07:59:11.362360] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:05.721 [2024-07-13 07:59:11.362474] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:13:05.721 [2024-07-13 07:59:11.362533] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:05.721 pt1 00:13:05.721 07:59:11 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:05.721 07:59:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:05.721 07:59:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:05.721 07:59:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:05.721 07:59:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:05.721 07:59:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:05.721 07:59:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:05.721 07:59:11 -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs 00:13:05.721 07:59:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:05.721 07:59:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:05.721 07:59:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.721 07:59:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:05.980 07:59:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:05.980 "name": "raid_bdev1", 00:13:05.980 "uuid": "8988b490-e6a4-43dc-9a7c-afb534f6d73e", 00:13:05.980 "strip_size_kb": 0, 00:13:05.980 "state": "configuring", 00:13:05.980 "raid_level": "raid1", 00:13:05.980 "superblock": true, 00:13:05.980 "num_base_bdevs": 3, 00:13:05.980 "num_base_bdevs_discovered": 1, 00:13:05.980 "num_base_bdevs_operational": 3, 00:13:05.980 "base_bdevs_list": [ 00:13:05.980 { 00:13:05.980 "name": "pt1", 00:13:05.980 "uuid": "49e8af80-54e1-5c1f-a68d-b74627307e57", 00:13:05.980 "is_configured": true, 00:13:05.980 "data_offset": 2048, 00:13:05.980 "data_size": 63488 00:13:05.980 }, 00:13:05.980 { 00:13:05.980 "name": null, 00:13:05.980 "uuid": "ce8e99e2-3abb-567f-918b-163da7e64b5d", 00:13:05.980 "is_configured": false, 00:13:05.980 "data_offset": 2048, 00:13:05.980 "data_size": 63488 00:13:05.980 }, 00:13:05.980 { 00:13:05.980 "name": null, 00:13:05.980 "uuid": "055c8681-9694-5e57-8b29-5cd469c6efd9", 00:13:05.980 "is_configured": false, 00:13:05.980 "data_offset": 2048, 00:13:05.980 "data_size": 63488 00:13:05.980 } 00:13:05.980 ] 00:13:05.980 }' 00:13:05.980 07:59:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:05.980 07:59:11 -- common/autotest_common.sh@10 -- # set +x 00:13:06.548 07:59:12 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:13:06.548 07:59:12 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:13:06.548 07:59:12 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:06.548 07:59:12 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:13:06.548 07:59:12 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:13:06.548 07:59:12 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:13:06.807 07:59:12 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:13:06.807 07:59:12 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:13:06.807 07:59:12 -- bdev/bdev_raid.sh@489 -- # i=2 00:13:06.807 07:59:12 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:06.807 [2024-07-13 07:59:12.576117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:06.807 [2024-07-13 07:59:12.576188] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.807 [2024-07-13 07:59:12.576238] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000037280 00:13:06.807 [2024-07-13 07:59:12.576274] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.807 [2024-07-13 07:59:12.576728] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.807 [2024-07-13 07:59:12.576773] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:06.807 [2024-07-13 07:59:12.576836] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:13:06.807 
[2024-07-13 07:59:12.576848] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:06.807 [2024-07-13 07:59:12.576856] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:06.807 [2024-07-13 07:59:12.576874] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000036c80 name raid_bdev1, state configuring 00:13:06.808 [2024-07-13 07:59:12.576905] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:06.808 pt3 00:13:06.808 07:59:12 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:13:06.808 07:59:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:06.808 07:59:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:06.808 07:59:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:06.808 07:59:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:06.808 07:59:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:06.808 07:59:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:06.808 07:59:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:06.808 07:59:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:06.808 07:59:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:06.808 07:59:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:06.808 07:59:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.067 07:59:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:07.067 "name": "raid_bdev1", 00:13:07.067 "uuid": "8988b490-e6a4-43dc-9a7c-afb534f6d73e", 00:13:07.067 "strip_size_kb": 0, 00:13:07.067 "state": "configuring", 00:13:07.067 "raid_level": "raid1", 00:13:07.067 "superblock": true, 00:13:07.067 "num_base_bdevs": 3, 00:13:07.067 "num_base_bdevs_discovered": 1, 00:13:07.067 "num_base_bdevs_operational": 2, 00:13:07.067 "base_bdevs_list": [ 00:13:07.067 { 00:13:07.067 "name": null, 00:13:07.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.067 "is_configured": false, 00:13:07.067 "data_offset": 2048, 00:13:07.067 "data_size": 63488 00:13:07.067 }, 00:13:07.067 { 00:13:07.067 "name": null, 00:13:07.067 "uuid": "ce8e99e2-3abb-567f-918b-163da7e64b5d", 00:13:07.067 "is_configured": false, 00:13:07.067 "data_offset": 2048, 00:13:07.067 "data_size": 63488 00:13:07.067 }, 00:13:07.067 { 00:13:07.067 "name": "pt3", 00:13:07.067 "uuid": "055c8681-9694-5e57-8b29-5cd469c6efd9", 00:13:07.067 "is_configured": true, 00:13:07.067 "data_offset": 2048, 00:13:07.067 "data_size": 63488 00:13:07.067 } 00:13:07.067 ] 00:13:07.067 }' 00:13:07.067 07:59:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:07.067 07:59:12 -- common/autotest_common.sh@10 -- # set +x 00:13:07.637 07:59:13 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:13:07.637 07:59:13 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:13:07.637 07:59:13 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:07.895 [2024-07-13 07:59:13.560238] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:07.895 [2024-07-13 07:59:13.560314] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.895 [2024-07-13 07:59:13.560347] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000038a80 00:13:07.895 [2024-07-13 07:59:13.560378] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.895 [2024-07-13 07:59:13.560825] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.895 [2024-07-13 07:59:13.560863] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:07.895 [2024-07-13 07:59:13.560916] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:13:07.895 [2024-07-13 07:59:13.560940] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:07.895 [2024-07-13 07:59:13.560998] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000038480 00:13:07.895 [2024-07-13 07:59:13.561006] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:07.895 [2024-07-13 07:59:13.561050] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0 00:13:07.895 [2024-07-13 07:59:13.561200] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000038480 00:13:07.895 [2024-07-13 07:59:13.561210] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000038480 00:13:07.895 [2024-07-13 07:59:13.561256] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.895 pt2 00:13:07.895 07:59:13 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:13:07.895 07:59:13 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:13:07.895 07:59:13 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:07.895 07:59:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:07.895 07:59:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:07.895 07:59:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:07.895 07:59:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:07.895 07:59:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:07.895 07:59:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:07.895 07:59:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:07.895 07:59:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:07.895 07:59:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:07.895 07:59:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:07.895 07:59:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.153 07:59:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:08.153 "name": "raid_bdev1", 00:13:08.153 "uuid": "8988b490-e6a4-43dc-9a7c-afb534f6d73e", 00:13:08.153 "strip_size_kb": 0, 00:13:08.153 "state": "online", 00:13:08.153 "raid_level": "raid1", 00:13:08.153 "superblock": true, 00:13:08.153 "num_base_bdevs": 3, 00:13:08.153 "num_base_bdevs_discovered": 2, 00:13:08.153 "num_base_bdevs_operational": 2, 00:13:08.153 "base_bdevs_list": [ 00:13:08.153 { 00:13:08.153 "name": null, 00:13:08.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.153 "is_configured": false, 00:13:08.153 "data_offset": 2048, 00:13:08.153 "data_size": 63488 00:13:08.153 }, 00:13:08.153 { 00:13:08.153 "name": "pt2", 00:13:08.153 "uuid": "ce8e99e2-3abb-567f-918b-163da7e64b5d", 00:13:08.153 "is_configured": true, 00:13:08.153 "data_offset": 2048, 00:13:08.153 "data_size": 63488 00:13:08.153 
}, 00:13:08.153 { 00:13:08.153 "name": "pt3", 00:13:08.153 "uuid": "055c8681-9694-5e57-8b29-5cd469c6efd9", 00:13:08.153 "is_configured": true, 00:13:08.153 "data_offset": 2048, 00:13:08.153 "data_size": 63488 00:13:08.153 } 00:13:08.153 ] 00:13:08.153 }' 00:13:08.153 07:59:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:08.153 07:59:13 -- common/autotest_common.sh@10 -- # set +x 00:13:08.720 07:59:14 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:08.720 07:59:14 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:13:08.720 [2024-07-13 07:59:14.476483] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:08.720 07:59:14 -- bdev/bdev_raid.sh@506 -- # '[' 8988b490-e6a4-43dc-9a7c-afb534f6d73e '!=' 8988b490-e6a4-43dc-9a7c-afb534f6d73e ']' 00:13:08.720 07:59:14 -- bdev/bdev_raid.sh@511 -- # killprocess 62857 00:13:08.720 07:59:14 -- common/autotest_common.sh@926 -- # '[' -z 62857 ']' 00:13:08.720 07:59:14 -- common/autotest_common.sh@930 -- # kill -0 62857 00:13:08.720 07:59:14 -- common/autotest_common.sh@931 -- # uname 00:13:08.720 07:59:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:08.720 07:59:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62857 00:13:08.720 07:59:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:08.720 07:59:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:08.720 killing process with pid 62857 00:13:08.720 07:59:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62857' 00:13:08.720 07:59:14 -- common/autotest_common.sh@945 -- # kill 62857 00:13:08.720 [2024-07-13 07:59:14.516651] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:08.720 07:59:14 -- common/autotest_common.sh@950 -- # wait 62857 00:13:08.720 [2024-07-13 07:59:14.516705] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:08.720 [2024-07-13 07:59:14.516741] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:08.720 [2024-07-13 07:59:14.516750] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000038480 name raid_bdev1, state offline 00:13:08.978 [2024-07-13 07:59:14.546333] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:08.978 07:59:14 -- bdev/bdev_raid.sh@513 -- # return 0 00:13:08.978 ************************************ 00:13:08.978 END TEST raid_superblock_test 00:13:08.978 ************************************ 00:13:08.978 00:13:08.978 real 0m14.881s 00:13:08.978 user 0m27.906s 00:13:08.978 sys 0m2.022s 00:13:08.978 07:59:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:08.978 07:59:14 -- common/autotest_common.sh@10 -- # set +x 00:13:08.978 07:59:14 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:13:08.978 07:59:14 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:13:08.978 07:59:14 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:13:08.978 07:59:14 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:08.978 07:59:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:08.978 07:59:14 -- common/autotest_common.sh@10 -- # set +x 00:13:08.978 ************************************ 00:13:08.978 START TEST raid_state_function_test 00:13:08.978 ************************************ 00:13:08.978 07:59:14 -- 
common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 false 00:13:08.978 07:59:14 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:13:08.978 07:59:14 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:13:08.978 07:59:14 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:13:08.978 07:59:14 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:13:08.978 07:59:14 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:13:09.238 07:59:14 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:13:09.238 07:59:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:09.238 07:59:14 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:13:09.238 07:59:14 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:09.238 07:59:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:09.238 07:59:14 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:13:09.238 07:59:14 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:09.238 07:59:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:09.238 07:59:14 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:13:09.238 07:59:14 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:09.238 07:59:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:09.238 07:59:14 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:13:09.238 07:59:14 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:09.238 07:59:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:09.238 07:59:14 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:13:09.238 Process raid pid: 63415 00:13:09.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:09.238 07:59:14 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:13:09.238 07:59:14 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:13:09.238 07:59:14 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:13:09.238 07:59:14 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:13:09.238 07:59:14 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:13:09.238 07:59:14 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:13:09.238 07:59:14 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:13:09.238 07:59:14 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:13:09.238 07:59:14 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:13:09.238 07:59:14 -- bdev/bdev_raid.sh@226 -- # raid_pid=63415 00:13:09.238 07:59:14 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 63415' 00:13:09.238 07:59:14 -- bdev/bdev_raid.sh@228 -- # waitforlisten 63415 /var/tmp/spdk-raid.sock 00:13:09.238 07:59:14 -- common/autotest_common.sh@819 -- # '[' -z 63415 ']' 00:13:09.238 07:59:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:09.238 07:59:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:09.238 07:59:14 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:09.238 07:59:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:09.238 07:59:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:09.238 07:59:14 -- common/autotest_common.sh@10 -- # set +x 00:13:09.238 [2024-07-13 07:59:14.929792] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:13:09.238 [2024-07-13 07:59:14.930033] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.496 [2024-07-13 07:59:15.085323] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.496 [2024-07-13 07:59:15.134305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.496 [2024-07-13 07:59:15.183701] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:10.063 07:59:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:10.063 07:59:15 -- common/autotest_common.sh@852 -- # return 0 00:13:10.063 07:59:15 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:10.322 [2024-07-13 07:59:15.908852] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:10.322 [2024-07-13 07:59:15.908908] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:10.322 [2024-07-13 07:59:15.908918] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:10.322 [2024-07-13 07:59:15.908953] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:10.322 [2024-07-13 07:59:15.908960] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:10.322 [2024-07-13 07:59:15.908993] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:10.322 [2024-07-13 07:59:15.909001] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:10.322 [2024-07-13 07:59:15.909022] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:10.322 07:59:15 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:10.322 07:59:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:10.322 07:59:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:10.322 07:59:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:10.322 07:59:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:10.322 07:59:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:10.322 07:59:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:10.322 07:59:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:10.322 07:59:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:10.322 07:59:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:10.322 07:59:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.322 07:59:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:10.322 07:59:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:10.322 "name": "Existed_Raid", 00:13:10.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.322 "strip_size_kb": 64, 00:13:10.322 "state": "configuring", 00:13:10.322 "raid_level": "raid0", 00:13:10.322 "superblock": false, 00:13:10.322 "num_base_bdevs": 4, 00:13:10.322 "num_base_bdevs_discovered": 0, 00:13:10.322 "num_base_bdevs_operational": 4, 00:13:10.322 "base_bdevs_list": [ 00:13:10.322 { 00:13:10.322 
"name": "BaseBdev1", 00:13:10.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.322 "is_configured": false, 00:13:10.322 "data_offset": 0, 00:13:10.322 "data_size": 0 00:13:10.322 }, 00:13:10.322 { 00:13:10.322 "name": "BaseBdev2", 00:13:10.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.322 "is_configured": false, 00:13:10.322 "data_offset": 0, 00:13:10.322 "data_size": 0 00:13:10.322 }, 00:13:10.322 { 00:13:10.322 "name": "BaseBdev3", 00:13:10.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.322 "is_configured": false, 00:13:10.322 "data_offset": 0, 00:13:10.322 "data_size": 0 00:13:10.322 }, 00:13:10.322 { 00:13:10.322 "name": "BaseBdev4", 00:13:10.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.322 "is_configured": false, 00:13:10.322 "data_offset": 0, 00:13:10.322 "data_size": 0 00:13:10.322 } 00:13:10.322 ] 00:13:10.322 }' 00:13:10.322 07:59:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:10.322 07:59:16 -- common/autotest_common.sh@10 -- # set +x 00:13:10.891 07:59:16 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:11.150 [2024-07-13 07:59:16.804876] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:11.150 [2024-07-13 07:59:16.804912] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000025880 name Existed_Raid, state configuring 00:13:11.150 07:59:16 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:11.150 [2024-07-13 07:59:16.960933] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:11.150 [2024-07-13 07:59:16.960987] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:11.150 [2024-07-13 07:59:16.960997] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:11.150 [2024-07-13 07:59:16.961019] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:11.150 [2024-07-13 07:59:16.961027] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:11.150 [2024-07-13 07:59:16.961050] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:11.150 [2024-07-13 07:59:16.961058] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:11.150 [2024-07-13 07:59:16.961080] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:11.409 07:59:16 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:11.409 BaseBdev1 00:13:11.409 [2024-07-13 07:59:17.115596] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:11.409 07:59:17 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:13:11.409 07:59:17 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:13:11.409 07:59:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:11.409 07:59:17 -- common/autotest_common.sh@889 -- # local i 00:13:11.409 07:59:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:11.409 07:59:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:11.409 07:59:17 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:11.668 07:59:17 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:11.668 [ 00:13:11.668 { 00:13:11.668 "name": "BaseBdev1", 00:13:11.668 "aliases": [ 00:13:11.668 "5813807a-fd35-4328-84e0-b25f27193946" 00:13:11.668 ], 00:13:11.668 "product_name": "Malloc disk", 00:13:11.668 "block_size": 512, 00:13:11.668 "num_blocks": 65536, 00:13:11.668 "uuid": "5813807a-fd35-4328-84e0-b25f27193946", 00:13:11.668 "assigned_rate_limits": { 00:13:11.668 "rw_ios_per_sec": 0, 00:13:11.668 "rw_mbytes_per_sec": 0, 00:13:11.668 "r_mbytes_per_sec": 0, 00:13:11.668 "w_mbytes_per_sec": 0 00:13:11.668 }, 00:13:11.668 "claimed": true, 00:13:11.668 "claim_type": "exclusive_write", 00:13:11.668 "zoned": false, 00:13:11.668 "supported_io_types": { 00:13:11.668 "read": true, 00:13:11.668 "write": true, 00:13:11.668 "unmap": true, 00:13:11.668 "write_zeroes": true, 00:13:11.668 "flush": true, 00:13:11.668 "reset": true, 00:13:11.668 "compare": false, 00:13:11.668 "compare_and_write": false, 00:13:11.668 "abort": true, 00:13:11.668 "nvme_admin": false, 00:13:11.668 "nvme_io": false 00:13:11.668 }, 00:13:11.668 "memory_domains": [ 00:13:11.668 { 00:13:11.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.668 "dma_device_type": 2 00:13:11.668 } 00:13:11.669 ], 00:13:11.669 "driver_specific": {} 00:13:11.669 } 00:13:11.669 ] 00:13:11.669 07:59:17 -- common/autotest_common.sh@895 -- # return 0 00:13:11.669 07:59:17 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:11.669 07:59:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:11.669 07:59:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:11.669 07:59:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:11.669 07:59:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:11.669 07:59:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:11.669 07:59:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:11.669 07:59:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:11.669 07:59:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:11.669 07:59:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:11.669 07:59:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:11.669 07:59:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:11.928 07:59:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:11.928 "name": "Existed_Raid", 00:13:11.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.928 "strip_size_kb": 64, 00:13:11.928 "state": "configuring", 00:13:11.928 "raid_level": "raid0", 00:13:11.928 "superblock": false, 00:13:11.928 "num_base_bdevs": 4, 00:13:11.928 "num_base_bdevs_discovered": 1, 00:13:11.928 "num_base_bdevs_operational": 4, 00:13:11.928 "base_bdevs_list": [ 00:13:11.928 { 00:13:11.928 "name": "BaseBdev1", 00:13:11.928 "uuid": "5813807a-fd35-4328-84e0-b25f27193946", 00:13:11.928 "is_configured": true, 00:13:11.928 "data_offset": 0, 00:13:11.928 "data_size": 65536 00:13:11.928 }, 00:13:11.928 { 00:13:11.928 "name": "BaseBdev2", 00:13:11.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.928 "is_configured": false, 00:13:11.928 "data_offset": 0, 00:13:11.928 "data_size": 0 00:13:11.928 }, 
00:13:11.928 { 00:13:11.928 "name": "BaseBdev3", 00:13:11.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.928 "is_configured": false, 00:13:11.928 "data_offset": 0, 00:13:11.928 "data_size": 0 00:13:11.928 }, 00:13:11.928 { 00:13:11.928 "name": "BaseBdev4", 00:13:11.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.928 "is_configured": false, 00:13:11.928 "data_offset": 0, 00:13:11.928 "data_size": 0 00:13:11.928 } 00:13:11.928 ] 00:13:11.928 }' 00:13:11.928 07:59:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:11.928 07:59:17 -- common/autotest_common.sh@10 -- # set +x 00:13:12.494 07:59:18 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:12.494 [2024-07-13 07:59:18.231707] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:12.494 [2024-07-13 07:59:18.231740] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:13:12.494 07:59:18 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:13:12.494 07:59:18 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:12.753 [2024-07-13 07:59:18.447925] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:12.753 [2024-07-13 07:59:18.453779] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:12.753 [2024-07-13 07:59:18.453891] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:12.753 [2024-07-13 07:59:18.453908] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:12.753 [2024-07-13 07:59:18.453946] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:12.753 [2024-07-13 07:59:18.453959] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:12.753 [2024-07-13 07:59:18.454001] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:12.753 07:59:18 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:13:12.753 07:59:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:12.753 07:59:18 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:12.753 07:59:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:12.753 07:59:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:12.753 07:59:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:12.753 07:59:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:12.753 07:59:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:12.753 07:59:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:12.753 07:59:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:12.753 07:59:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:12.753 07:59:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:12.753 07:59:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.753 07:59:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:13.013 07:59:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:13.013 "name": "Existed_Raid", 00:13:13.013 
"uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.013 "strip_size_kb": 64, 00:13:13.013 "state": "configuring", 00:13:13.013 "raid_level": "raid0", 00:13:13.013 "superblock": false, 00:13:13.013 "num_base_bdevs": 4, 00:13:13.013 "num_base_bdevs_discovered": 1, 00:13:13.013 "num_base_bdevs_operational": 4, 00:13:13.013 "base_bdevs_list": [ 00:13:13.013 { 00:13:13.013 "name": "BaseBdev1", 00:13:13.013 "uuid": "5813807a-fd35-4328-84e0-b25f27193946", 00:13:13.013 "is_configured": true, 00:13:13.013 "data_offset": 0, 00:13:13.013 "data_size": 65536 00:13:13.013 }, 00:13:13.013 { 00:13:13.013 "name": "BaseBdev2", 00:13:13.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.013 "is_configured": false, 00:13:13.013 "data_offset": 0, 00:13:13.013 "data_size": 0 00:13:13.013 }, 00:13:13.013 { 00:13:13.013 "name": "BaseBdev3", 00:13:13.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.013 "is_configured": false, 00:13:13.013 "data_offset": 0, 00:13:13.013 "data_size": 0 00:13:13.013 }, 00:13:13.013 { 00:13:13.013 "name": "BaseBdev4", 00:13:13.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.013 "is_configured": false, 00:13:13.013 "data_offset": 0, 00:13:13.013 "data_size": 0 00:13:13.013 } 00:13:13.013 ] 00:13:13.013 }' 00:13:13.013 07:59:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:13.013 07:59:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.272 07:59:19 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:13.530 [2024-07-13 07:59:19.225290] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:13.530 BaseBdev2 00:13:13.530 07:59:19 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:13:13.530 07:59:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:13:13.530 07:59:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:13.530 07:59:19 -- common/autotest_common.sh@889 -- # local i 00:13:13.530 07:59:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:13.530 07:59:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:13.530 07:59:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:13.789 07:59:19 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:13.789 [ 00:13:13.789 { 00:13:13.789 "name": "BaseBdev2", 00:13:13.789 "aliases": [ 00:13:13.789 "2e47e1e6-d619-4848-80a7-e49922f11c1c" 00:13:13.789 ], 00:13:13.789 "product_name": "Malloc disk", 00:13:13.789 "block_size": 512, 00:13:13.789 "num_blocks": 65536, 00:13:13.789 "uuid": "2e47e1e6-d619-4848-80a7-e49922f11c1c", 00:13:13.789 "assigned_rate_limits": { 00:13:13.789 "rw_ios_per_sec": 0, 00:13:13.789 "rw_mbytes_per_sec": 0, 00:13:13.789 "r_mbytes_per_sec": 0, 00:13:13.789 "w_mbytes_per_sec": 0 00:13:13.789 }, 00:13:13.789 "claimed": true, 00:13:13.789 "claim_type": "exclusive_write", 00:13:13.789 "zoned": false, 00:13:13.789 "supported_io_types": { 00:13:13.789 "read": true, 00:13:13.789 "write": true, 00:13:13.789 "unmap": true, 00:13:13.789 "write_zeroes": true, 00:13:13.789 "flush": true, 00:13:13.789 "reset": true, 00:13:13.789 "compare": false, 00:13:13.789 "compare_and_write": false, 00:13:13.789 "abort": true, 00:13:13.789 "nvme_admin": false, 00:13:13.789 "nvme_io": false 00:13:13.789 }, 00:13:13.789 "memory_domains": [ 
00:13:13.789 { 00:13:13.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.789 "dma_device_type": 2 00:13:13.789 } 00:13:13.789 ], 00:13:13.789 "driver_specific": {} 00:13:13.789 } 00:13:13.789 ] 00:13:13.789 07:59:19 -- common/autotest_common.sh@895 -- # return 0 00:13:13.789 07:59:19 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:13.789 07:59:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:13.789 07:59:19 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:13.789 07:59:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:13.789 07:59:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:13.789 07:59:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:13.789 07:59:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:13.789 07:59:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:13.789 07:59:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:13.789 07:59:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:13.789 07:59:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:13.789 07:59:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:13.789 07:59:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.789 07:59:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:14.047 07:59:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:14.047 "name": "Existed_Raid", 00:13:14.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.047 "strip_size_kb": 64, 00:13:14.047 "state": "configuring", 00:13:14.047 "raid_level": "raid0", 00:13:14.047 "superblock": false, 00:13:14.047 "num_base_bdevs": 4, 00:13:14.047 "num_base_bdevs_discovered": 2, 00:13:14.047 "num_base_bdevs_operational": 4, 00:13:14.047 "base_bdevs_list": [ 00:13:14.047 { 00:13:14.047 "name": "BaseBdev1", 00:13:14.047 "uuid": "5813807a-fd35-4328-84e0-b25f27193946", 00:13:14.047 "is_configured": true, 00:13:14.047 "data_offset": 0, 00:13:14.047 "data_size": 65536 00:13:14.047 }, 00:13:14.047 { 00:13:14.047 "name": "BaseBdev2", 00:13:14.047 "uuid": "2e47e1e6-d619-4848-80a7-e49922f11c1c", 00:13:14.047 "is_configured": true, 00:13:14.047 "data_offset": 0, 00:13:14.047 "data_size": 65536 00:13:14.047 }, 00:13:14.047 { 00:13:14.047 "name": "BaseBdev3", 00:13:14.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.048 "is_configured": false, 00:13:14.048 "data_offset": 0, 00:13:14.048 "data_size": 0 00:13:14.048 }, 00:13:14.048 { 00:13:14.048 "name": "BaseBdev4", 00:13:14.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.048 "is_configured": false, 00:13:14.048 "data_offset": 0, 00:13:14.048 "data_size": 0 00:13:14.048 } 00:13:14.048 ] 00:13:14.048 }' 00:13:14.048 07:59:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:14.048 07:59:19 -- common/autotest_common.sh@10 -- # set +x 00:13:14.614 07:59:20 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:14.872 [2024-07-13 07:59:20.501051] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:14.872 BaseBdev3 00:13:14.872 07:59:20 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:13:14.872 07:59:20 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:13:14.872 07:59:20 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:14.872 
07:59:20 -- common/autotest_common.sh@889 -- # local i 00:13:14.872 07:59:20 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:14.872 07:59:20 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:14.872 07:59:20 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:14.872 07:59:20 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:15.130 [ 00:13:15.130 { 00:13:15.130 "name": "BaseBdev3", 00:13:15.130 "aliases": [ 00:13:15.130 "ff64fd3b-a6c9-49c4-ac14-6cb3fb2f711b" 00:13:15.130 ], 00:13:15.130 "product_name": "Malloc disk", 00:13:15.130 "block_size": 512, 00:13:15.130 "num_blocks": 65536, 00:13:15.130 "uuid": "ff64fd3b-a6c9-49c4-ac14-6cb3fb2f711b", 00:13:15.130 "assigned_rate_limits": { 00:13:15.130 "rw_ios_per_sec": 0, 00:13:15.130 "rw_mbytes_per_sec": 0, 00:13:15.130 "r_mbytes_per_sec": 0, 00:13:15.130 "w_mbytes_per_sec": 0 00:13:15.130 }, 00:13:15.130 "claimed": true, 00:13:15.130 "claim_type": "exclusive_write", 00:13:15.130 "zoned": false, 00:13:15.130 "supported_io_types": { 00:13:15.130 "read": true, 00:13:15.130 "write": true, 00:13:15.130 "unmap": true, 00:13:15.130 "write_zeroes": true, 00:13:15.130 "flush": true, 00:13:15.130 "reset": true, 00:13:15.130 "compare": false, 00:13:15.130 "compare_and_write": false, 00:13:15.130 "abort": true, 00:13:15.130 "nvme_admin": false, 00:13:15.130 "nvme_io": false 00:13:15.130 }, 00:13:15.130 "memory_domains": [ 00:13:15.130 { 00:13:15.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.130 "dma_device_type": 2 00:13:15.130 } 00:13:15.130 ], 00:13:15.130 "driver_specific": {} 00:13:15.130 } 00:13:15.130 ] 00:13:15.130 07:59:20 -- common/autotest_common.sh@895 -- # return 0 00:13:15.130 07:59:20 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:15.130 07:59:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:15.130 07:59:20 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:15.130 07:59:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:15.130 07:59:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:15.130 07:59:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:15.130 07:59:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:15.130 07:59:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:15.130 07:59:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:15.130 07:59:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:15.130 07:59:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:15.130 07:59:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:15.130 07:59:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:15.130 07:59:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.388 07:59:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:15.388 "name": "Existed_Raid", 00:13:15.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.388 "strip_size_kb": 64, 00:13:15.388 "state": "configuring", 00:13:15.388 "raid_level": "raid0", 00:13:15.388 "superblock": false, 00:13:15.388 "num_base_bdevs": 4, 00:13:15.388 "num_base_bdevs_discovered": 3, 00:13:15.388 "num_base_bdevs_operational": 4, 00:13:15.388 "base_bdevs_list": [ 00:13:15.388 { 00:13:15.388 "name": 
"BaseBdev1", 00:13:15.388 "uuid": "5813807a-fd35-4328-84e0-b25f27193946", 00:13:15.388 "is_configured": true, 00:13:15.388 "data_offset": 0, 00:13:15.388 "data_size": 65536 00:13:15.388 }, 00:13:15.388 { 00:13:15.388 "name": "BaseBdev2", 00:13:15.388 "uuid": "2e47e1e6-d619-4848-80a7-e49922f11c1c", 00:13:15.388 "is_configured": true, 00:13:15.388 "data_offset": 0, 00:13:15.388 "data_size": 65536 00:13:15.388 }, 00:13:15.388 { 00:13:15.388 "name": "BaseBdev3", 00:13:15.388 "uuid": "ff64fd3b-a6c9-49c4-ac14-6cb3fb2f711b", 00:13:15.388 "is_configured": true, 00:13:15.388 "data_offset": 0, 00:13:15.388 "data_size": 65536 00:13:15.388 }, 00:13:15.388 { 00:13:15.388 "name": "BaseBdev4", 00:13:15.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.389 "is_configured": false, 00:13:15.389 "data_offset": 0, 00:13:15.389 "data_size": 0 00:13:15.389 } 00:13:15.389 ] 00:13:15.389 }' 00:13:15.389 07:59:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:15.389 07:59:21 -- common/autotest_common.sh@10 -- # set +x 00:13:15.954 07:59:21 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:13:15.954 [2024-07-13 07:59:21.668750] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:15.954 [2024-07-13 07:59:21.668789] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000027c80 00:13:15.954 [2024-07-13 07:59:21.668798] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:15.954 [2024-07-13 07:59:21.668902] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:13:15.954 [2024-07-13 07:59:21.669089] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000027c80 00:13:15.954 [2024-07-13 07:59:21.669099] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000027c80 00:13:15.954 [2024-07-13 07:59:21.669241] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:15.954 BaseBdev4 00:13:15.954 07:59:21 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:13:15.954 07:59:21 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:13:15.954 07:59:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:15.954 07:59:21 -- common/autotest_common.sh@889 -- # local i 00:13:15.954 07:59:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:15.954 07:59:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:15.954 07:59:21 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:16.212 07:59:21 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:16.212 [ 00:13:16.212 { 00:13:16.212 "name": "BaseBdev4", 00:13:16.212 "aliases": [ 00:13:16.212 "dd7fc5ea-aa74-42b3-a99d-e8c50ce649bc" 00:13:16.212 ], 00:13:16.212 "product_name": "Malloc disk", 00:13:16.212 "block_size": 512, 00:13:16.212 "num_blocks": 65536, 00:13:16.212 "uuid": "dd7fc5ea-aa74-42b3-a99d-e8c50ce649bc", 00:13:16.212 "assigned_rate_limits": { 00:13:16.212 "rw_ios_per_sec": 0, 00:13:16.212 "rw_mbytes_per_sec": 0, 00:13:16.212 "r_mbytes_per_sec": 0, 00:13:16.212 "w_mbytes_per_sec": 0 00:13:16.212 }, 00:13:16.212 "claimed": true, 00:13:16.212 "claim_type": "exclusive_write", 00:13:16.212 "zoned": false, 00:13:16.212 
"supported_io_types": { 00:13:16.212 "read": true, 00:13:16.212 "write": true, 00:13:16.212 "unmap": true, 00:13:16.212 "write_zeroes": true, 00:13:16.212 "flush": true, 00:13:16.212 "reset": true, 00:13:16.212 "compare": false, 00:13:16.212 "compare_and_write": false, 00:13:16.212 "abort": true, 00:13:16.212 "nvme_admin": false, 00:13:16.212 "nvme_io": false 00:13:16.212 }, 00:13:16.212 "memory_domains": [ 00:13:16.212 { 00:13:16.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.212 "dma_device_type": 2 00:13:16.212 } 00:13:16.212 ], 00:13:16.212 "driver_specific": {} 00:13:16.212 } 00:13:16.212 ] 00:13:16.212 07:59:21 -- common/autotest_common.sh@895 -- # return 0 00:13:16.212 07:59:21 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:16.212 07:59:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:16.212 07:59:21 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:16.212 07:59:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:16.212 07:59:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:16.212 07:59:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:16.212 07:59:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:16.212 07:59:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:16.212 07:59:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:16.212 07:59:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:16.212 07:59:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:16.212 07:59:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:16.212 07:59:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:16.212 07:59:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.469 07:59:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:16.469 "name": "Existed_Raid", 00:13:16.469 "uuid": "6e4f576c-f303-4917-918a-2402f1353f5f", 00:13:16.469 "strip_size_kb": 64, 00:13:16.469 "state": "online", 00:13:16.469 "raid_level": "raid0", 00:13:16.469 "superblock": false, 00:13:16.469 "num_base_bdevs": 4, 00:13:16.469 "num_base_bdevs_discovered": 4, 00:13:16.469 "num_base_bdevs_operational": 4, 00:13:16.469 "base_bdevs_list": [ 00:13:16.469 { 00:13:16.469 "name": "BaseBdev1", 00:13:16.469 "uuid": "5813807a-fd35-4328-84e0-b25f27193946", 00:13:16.469 "is_configured": true, 00:13:16.469 "data_offset": 0, 00:13:16.469 "data_size": 65536 00:13:16.469 }, 00:13:16.469 { 00:13:16.469 "name": "BaseBdev2", 00:13:16.469 "uuid": "2e47e1e6-d619-4848-80a7-e49922f11c1c", 00:13:16.469 "is_configured": true, 00:13:16.469 "data_offset": 0, 00:13:16.469 "data_size": 65536 00:13:16.469 }, 00:13:16.469 { 00:13:16.469 "name": "BaseBdev3", 00:13:16.469 "uuid": "ff64fd3b-a6c9-49c4-ac14-6cb3fb2f711b", 00:13:16.469 "is_configured": true, 00:13:16.469 "data_offset": 0, 00:13:16.469 "data_size": 65536 00:13:16.469 }, 00:13:16.469 { 00:13:16.469 "name": "BaseBdev4", 00:13:16.469 "uuid": "dd7fc5ea-aa74-42b3-a99d-e8c50ce649bc", 00:13:16.469 "is_configured": true, 00:13:16.469 "data_offset": 0, 00:13:16.469 "data_size": 65536 00:13:16.469 } 00:13:16.469 ] 00:13:16.469 }' 00:13:16.469 07:59:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:16.469 07:59:22 -- common/autotest_common.sh@10 -- # set +x 00:13:17.034 07:59:22 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:17.034 
[2024-07-13 07:59:22.804943] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:17.034 [2024-07-13 07:59:22.804970] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:17.034 [2024-07-13 07:59:22.805012] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:17.034 07:59:22 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:13:17.034 07:59:22 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:13:17.034 07:59:22 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:17.035 07:59:22 -- bdev/bdev_raid.sh@197 -- # return 1 00:13:17.035 07:59:22 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:13:17.035 07:59:22 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:13:17.035 07:59:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:17.035 07:59:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:13:17.035 07:59:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:17.035 07:59:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:17.035 07:59:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:17.035 07:59:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:17.035 07:59:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:17.035 07:59:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:17.035 07:59:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:17.035 07:59:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:17.035 07:59:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.291 07:59:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:17.291 "name": "Existed_Raid", 00:13:17.291 "uuid": "6e4f576c-f303-4917-918a-2402f1353f5f", 00:13:17.291 "strip_size_kb": 64, 00:13:17.291 "state": "offline", 00:13:17.291 "raid_level": "raid0", 00:13:17.291 "superblock": false, 00:13:17.291 "num_base_bdevs": 4, 00:13:17.291 "num_base_bdevs_discovered": 3, 00:13:17.291 "num_base_bdevs_operational": 3, 00:13:17.291 "base_bdevs_list": [ 00:13:17.291 { 00:13:17.291 "name": null, 00:13:17.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.291 "is_configured": false, 00:13:17.291 "data_offset": 0, 00:13:17.291 "data_size": 65536 00:13:17.291 }, 00:13:17.291 { 00:13:17.291 "name": "BaseBdev2", 00:13:17.291 "uuid": "2e47e1e6-d619-4848-80a7-e49922f11c1c", 00:13:17.291 "is_configured": true, 00:13:17.291 "data_offset": 0, 00:13:17.291 "data_size": 65536 00:13:17.291 }, 00:13:17.291 { 00:13:17.291 "name": "BaseBdev3", 00:13:17.291 "uuid": "ff64fd3b-a6c9-49c4-ac14-6cb3fb2f711b", 00:13:17.291 "is_configured": true, 00:13:17.291 "data_offset": 0, 00:13:17.291 "data_size": 65536 00:13:17.291 }, 00:13:17.291 { 00:13:17.291 "name": "BaseBdev4", 00:13:17.291 "uuid": "dd7fc5ea-aa74-42b3-a99d-e8c50ce649bc", 00:13:17.291 "is_configured": true, 00:13:17.291 "data_offset": 0, 00:13:17.291 "data_size": 65536 00:13:17.291 } 00:13:17.291 ] 00:13:17.291 }' 00:13:17.291 07:59:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:17.291 07:59:23 -- common/autotest_common.sh@10 -- # set +x 00:13:17.968 07:59:23 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:13:17.968 07:59:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:17.968 07:59:23 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:17.968 07:59:23 -- bdev/bdev_raid.sh@274 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:17.968 07:59:23 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:17.968 07:59:23 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:17.968 07:59:23 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:18.225 [2024-07-13 07:59:23.879410] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:18.225 07:59:23 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:18.225 07:59:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:18.225 07:59:23 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:18.225 07:59:23 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:18.482 07:59:24 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:18.482 07:59:24 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:18.482 07:59:24 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:18.482 [2024-07-13 07:59:24.197901] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:18.482 07:59:24 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:18.482 07:59:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:18.482 07:59:24 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:18.482 07:59:24 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:18.740 07:59:24 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:18.740 07:59:24 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:18.740 07:59:24 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:13:18.740 [2024-07-13 07:59:24.512404] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:18.740 [2024-07-13 07:59:24.512445] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027c80 name Existed_Raid, state offline 00:13:18.740 07:59:24 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:18.740 07:59:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:18.740 07:59:24 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:13:18.740 07:59:24 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:18.997 07:59:24 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:13:18.997 07:59:24 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:13:18.997 07:59:24 -- bdev/bdev_raid.sh@287 -- # killprocess 63415 00:13:18.997 07:59:24 -- common/autotest_common.sh@926 -- # '[' -z 63415 ']' 00:13:18.997 07:59:24 -- common/autotest_common.sh@930 -- # kill -0 63415 00:13:18.997 07:59:24 -- common/autotest_common.sh@931 -- # uname 00:13:18.997 07:59:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:18.997 07:59:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63415 00:13:18.997 killing process with pid 63415 00:13:18.997 07:59:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:18.997 07:59:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:18.997 07:59:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63415' 00:13:18.997 07:59:24 -- 
common/autotest_common.sh@945 -- # kill 63415 00:13:18.997 07:59:24 -- common/autotest_common.sh@950 -- # wait 63415 00:13:18.997 [2024-07-13 07:59:24.737731] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:18.997 [2024-07-13 07:59:24.737787] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:19.255 ************************************ 00:13:19.255 END TEST raid_state_function_test 00:13:19.255 ************************************ 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@289 -- # return 0 00:13:19.255 00:13:19.255 real 0m10.142s 00:13:19.255 user 0m18.728s 00:13:19.255 sys 0m1.370s 00:13:19.255 07:59:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:19.255 07:59:24 -- common/autotest_common.sh@10 -- # set +x 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:13:19.255 07:59:24 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:19.255 07:59:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:19.255 07:59:24 -- common/autotest_common.sh@10 -- # set +x 00:13:19.255 ************************************ 00:13:19.255 START TEST raid_state_function_test_sb 00:13:19.255 ************************************ 00:13:19.255 07:59:24 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 true 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:13:19.255 Process raid pid: 
63811 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@226 -- # raid_pid=63811 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 63811' 00:13:19.255 07:59:24 -- bdev/bdev_raid.sh@228 -- # waitforlisten 63811 /var/tmp/spdk-raid.sock 00:13:19.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:19.255 07:59:24 -- common/autotest_common.sh@819 -- # '[' -z 63811 ']' 00:13:19.255 07:59:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:19.255 07:59:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:19.255 07:59:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:19.255 07:59:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:19.255 07:59:24 -- common/autotest_common.sh@10 -- # set +x 00:13:19.514 [2024-07-13 07:59:25.120982] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:13:19.514 [2024-07-13 07:59:25.121140] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.514 [2024-07-13 07:59:25.252585] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.514 [2024-07-13 07:59:25.296816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.772 [2024-07-13 07:59:25.342249] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:20.338 07:59:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:20.338 07:59:25 -- common/autotest_common.sh@852 -- # return 0 00:13:20.338 07:59:25 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:20.338 [2024-07-13 07:59:26.096771] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:20.338 [2024-07-13 07:59:26.096831] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:20.338 [2024-07-13 07:59:26.096842] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:20.338 [2024-07-13 07:59:26.096862] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:20.338 [2024-07-13 07:59:26.096869] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:20.338 [2024-07-13 07:59:26.096903] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:20.338 [2024-07-13 07:59:26.096910] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:20.338 [2024-07-13 07:59:26.096933] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:20.338 07:59:26 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:20.338 07:59:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:20.338 07:59:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:20.338 07:59:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 
00:13:20.338 07:59:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:20.338 07:59:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:20.338 07:59:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:20.338 07:59:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:20.338 07:59:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:20.338 07:59:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:20.338 07:59:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.338 07:59:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:20.596 07:59:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:20.596 "name": "Existed_Raid", 00:13:20.596 "uuid": "c6e797c0-44a2-4285-850d-f2581c83ca3c", 00:13:20.596 "strip_size_kb": 64, 00:13:20.596 "state": "configuring", 00:13:20.596 "raid_level": "raid0", 00:13:20.596 "superblock": true, 00:13:20.596 "num_base_bdevs": 4, 00:13:20.596 "num_base_bdevs_discovered": 0, 00:13:20.596 "num_base_bdevs_operational": 4, 00:13:20.596 "base_bdevs_list": [ 00:13:20.596 { 00:13:20.596 "name": "BaseBdev1", 00:13:20.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.596 "is_configured": false, 00:13:20.596 "data_offset": 0, 00:13:20.596 "data_size": 0 00:13:20.596 }, 00:13:20.596 { 00:13:20.596 "name": "BaseBdev2", 00:13:20.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.596 "is_configured": false, 00:13:20.596 "data_offset": 0, 00:13:20.596 "data_size": 0 00:13:20.596 }, 00:13:20.596 { 00:13:20.596 "name": "BaseBdev3", 00:13:20.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.596 "is_configured": false, 00:13:20.596 "data_offset": 0, 00:13:20.596 "data_size": 0 00:13:20.596 }, 00:13:20.596 { 00:13:20.596 "name": "BaseBdev4", 00:13:20.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.596 "is_configured": false, 00:13:20.596 "data_offset": 0, 00:13:20.596 "data_size": 0 00:13:20.596 } 00:13:20.596 ] 00:13:20.596 }' 00:13:20.596 07:59:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:20.596 07:59:26 -- common/autotest_common.sh@10 -- # set +x 00:13:21.161 07:59:26 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:21.161 [2024-07-13 07:59:26.928803] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:21.161 [2024-07-13 07:59:26.928844] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000025880 name Existed_Raid, state configuring 00:13:21.161 07:59:26 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:21.419 [2024-07-13 07:59:27.088863] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:21.419 [2024-07-13 07:59:27.088914] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:21.419 [2024-07-13 07:59:27.088924] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:21.419 [2024-07-13 07:59:27.088946] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:21.419 [2024-07-13 07:59:27.088954] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:21.419 [2024-07-13 
07:59:27.088983] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:21.419 [2024-07-13 07:59:27.088990] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:21.419 [2024-07-13 07:59:27.089013] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:21.419 07:59:27 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:21.677 [2024-07-13 07:59:27.247266] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:21.677 BaseBdev1 00:13:21.677 07:59:27 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:13:21.677 07:59:27 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:13:21.677 07:59:27 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:21.677 07:59:27 -- common/autotest_common.sh@889 -- # local i 00:13:21.677 07:59:27 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:21.677 07:59:27 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:21.677 07:59:27 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:21.677 07:59:27 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:21.935 [ 00:13:21.935 { 00:13:21.935 "name": "BaseBdev1", 00:13:21.935 "aliases": [ 00:13:21.935 "ee6e37e6-1857-409e-af32-bf278f45bdd3" 00:13:21.935 ], 00:13:21.935 "product_name": "Malloc disk", 00:13:21.935 "block_size": 512, 00:13:21.935 "num_blocks": 65536, 00:13:21.935 "uuid": "ee6e37e6-1857-409e-af32-bf278f45bdd3", 00:13:21.935 "assigned_rate_limits": { 00:13:21.935 "rw_ios_per_sec": 0, 00:13:21.935 "rw_mbytes_per_sec": 0, 00:13:21.935 "r_mbytes_per_sec": 0, 00:13:21.935 "w_mbytes_per_sec": 0 00:13:21.935 }, 00:13:21.935 "claimed": true, 00:13:21.935 "claim_type": "exclusive_write", 00:13:21.935 "zoned": false, 00:13:21.935 "supported_io_types": { 00:13:21.935 "read": true, 00:13:21.935 "write": true, 00:13:21.935 "unmap": true, 00:13:21.935 "write_zeroes": true, 00:13:21.935 "flush": true, 00:13:21.935 "reset": true, 00:13:21.935 "compare": false, 00:13:21.935 "compare_and_write": false, 00:13:21.936 "abort": true, 00:13:21.936 "nvme_admin": false, 00:13:21.936 "nvme_io": false 00:13:21.936 }, 00:13:21.936 "memory_domains": [ 00:13:21.936 { 00:13:21.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.936 "dma_device_type": 2 00:13:21.936 } 00:13:21.936 ], 00:13:21.936 "driver_specific": {} 00:13:21.936 } 00:13:21.936 ] 00:13:21.936 07:59:27 -- common/autotest_common.sh@895 -- # return 0 00:13:21.936 07:59:27 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:21.936 07:59:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:21.936 07:59:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:21.936 07:59:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:21.936 07:59:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:21.936 07:59:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:21.936 07:59:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:21.936 07:59:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:21.936 07:59:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:21.936 07:59:27 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:13:21.936 07:59:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:21.936 07:59:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.194 07:59:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:22.194 "name": "Existed_Raid", 00:13:22.194 "uuid": "dedff032-5c49-45d1-955c-2bcb3af972ba", 00:13:22.194 "strip_size_kb": 64, 00:13:22.194 "state": "configuring", 00:13:22.194 "raid_level": "raid0", 00:13:22.194 "superblock": true, 00:13:22.194 "num_base_bdevs": 4, 00:13:22.194 "num_base_bdevs_discovered": 1, 00:13:22.194 "num_base_bdevs_operational": 4, 00:13:22.194 "base_bdevs_list": [ 00:13:22.194 { 00:13:22.194 "name": "BaseBdev1", 00:13:22.194 "uuid": "ee6e37e6-1857-409e-af32-bf278f45bdd3", 00:13:22.194 "is_configured": true, 00:13:22.194 "data_offset": 2048, 00:13:22.194 "data_size": 63488 00:13:22.194 }, 00:13:22.194 { 00:13:22.194 "name": "BaseBdev2", 00:13:22.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.194 "is_configured": false, 00:13:22.194 "data_offset": 0, 00:13:22.194 "data_size": 0 00:13:22.194 }, 00:13:22.194 { 00:13:22.194 "name": "BaseBdev3", 00:13:22.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.194 "is_configured": false, 00:13:22.194 "data_offset": 0, 00:13:22.194 "data_size": 0 00:13:22.194 }, 00:13:22.194 { 00:13:22.194 "name": "BaseBdev4", 00:13:22.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.194 "is_configured": false, 00:13:22.194 "data_offset": 0, 00:13:22.194 "data_size": 0 00:13:22.194 } 00:13:22.194 ] 00:13:22.194 }' 00:13:22.194 07:59:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:22.194 07:59:27 -- common/autotest_common.sh@10 -- # set +x 00:13:22.761 07:59:28 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:23.019 [2024-07-13 07:59:28.603487] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:23.019 [2024-07-13 07:59:28.603539] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:13:23.019 07:59:28 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:13:23.019 07:59:28 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:23.019 07:59:28 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:23.278 BaseBdev1 00:13:23.278 07:59:29 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:13:23.278 07:59:29 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:13:23.278 07:59:29 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:23.278 07:59:29 -- common/autotest_common.sh@889 -- # local i 00:13:23.278 07:59:29 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:23.278 07:59:29 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:23.278 07:59:29 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:23.536 07:59:29 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:23.536 [ 00:13:23.536 { 00:13:23.536 "name": "BaseBdev1", 00:13:23.536 "aliases": [ 00:13:23.536 
"7023eb04-1949-45bc-b8d9-dd3b418b9267" 00:13:23.536 ], 00:13:23.536 "product_name": "Malloc disk", 00:13:23.536 "block_size": 512, 00:13:23.536 "num_blocks": 65536, 00:13:23.536 "uuid": "7023eb04-1949-45bc-b8d9-dd3b418b9267", 00:13:23.536 "assigned_rate_limits": { 00:13:23.536 "rw_ios_per_sec": 0, 00:13:23.536 "rw_mbytes_per_sec": 0, 00:13:23.536 "r_mbytes_per_sec": 0, 00:13:23.536 "w_mbytes_per_sec": 0 00:13:23.536 }, 00:13:23.536 "claimed": false, 00:13:23.536 "zoned": false, 00:13:23.536 "supported_io_types": { 00:13:23.536 "read": true, 00:13:23.536 "write": true, 00:13:23.536 "unmap": true, 00:13:23.536 "write_zeroes": true, 00:13:23.536 "flush": true, 00:13:23.536 "reset": true, 00:13:23.536 "compare": false, 00:13:23.536 "compare_and_write": false, 00:13:23.536 "abort": true, 00:13:23.536 "nvme_admin": false, 00:13:23.536 "nvme_io": false 00:13:23.536 }, 00:13:23.536 "memory_domains": [ 00:13:23.536 { 00:13:23.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.536 "dma_device_type": 2 00:13:23.536 } 00:13:23.536 ], 00:13:23.536 "driver_specific": {} 00:13:23.536 } 00:13:23.536 ] 00:13:23.536 07:59:29 -- common/autotest_common.sh@895 -- # return 0 00:13:23.536 07:59:29 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:23.795 [2024-07-13 07:59:29.462596] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:23.795 [2024-07-13 07:59:29.463940] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:23.795 [2024-07-13 07:59:29.464005] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:23.795 [2024-07-13 07:59:29.464016] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:23.795 [2024-07-13 07:59:29.464036] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:23.795 [2024-07-13 07:59:29.464044] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:23.795 [2024-07-13 07:59:29.464061] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:23.795 07:59:29 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:13:23.795 07:59:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:23.795 07:59:29 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:23.795 07:59:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:23.795 07:59:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:23.795 07:59:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:23.795 07:59:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:23.795 07:59:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:23.795 07:59:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:23.795 07:59:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:23.795 07:59:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:23.795 07:59:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:23.795 07:59:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.795 07:59:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:24.053 07:59:29 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:13:24.053 "name": "Existed_Raid", 00:13:24.053 "uuid": "411c8b95-df4f-4d03-babb-f65325ca1899", 00:13:24.053 "strip_size_kb": 64, 00:13:24.053 "state": "configuring", 00:13:24.053 "raid_level": "raid0", 00:13:24.053 "superblock": true, 00:13:24.053 "num_base_bdevs": 4, 00:13:24.053 "num_base_bdevs_discovered": 1, 00:13:24.053 "num_base_bdevs_operational": 4, 00:13:24.053 "base_bdevs_list": [ 00:13:24.053 { 00:13:24.053 "name": "BaseBdev1", 00:13:24.053 "uuid": "7023eb04-1949-45bc-b8d9-dd3b418b9267", 00:13:24.053 "is_configured": true, 00:13:24.053 "data_offset": 2048, 00:13:24.054 "data_size": 63488 00:13:24.054 }, 00:13:24.054 { 00:13:24.054 "name": "BaseBdev2", 00:13:24.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.054 "is_configured": false, 00:13:24.054 "data_offset": 0, 00:13:24.054 "data_size": 0 00:13:24.054 }, 00:13:24.054 { 00:13:24.054 "name": "BaseBdev3", 00:13:24.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.054 "is_configured": false, 00:13:24.054 "data_offset": 0, 00:13:24.054 "data_size": 0 00:13:24.054 }, 00:13:24.054 { 00:13:24.054 "name": "BaseBdev4", 00:13:24.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.054 "is_configured": false, 00:13:24.054 "data_offset": 0, 00:13:24.054 "data_size": 0 00:13:24.054 } 00:13:24.054 ] 00:13:24.054 }' 00:13:24.054 07:59:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:24.054 07:59:29 -- common/autotest_common.sh@10 -- # set +x 00:13:24.619 07:59:30 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:24.878 [2024-07-13 07:59:30.526194] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:24.878 BaseBdev2 00:13:24.878 07:59:30 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:13:24.878 07:59:30 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:13:24.878 07:59:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:24.878 07:59:30 -- common/autotest_common.sh@889 -- # local i 00:13:24.878 07:59:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:24.878 07:59:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:24.878 07:59:30 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:24.878 07:59:30 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:25.137 [ 00:13:25.137 { 00:13:25.137 "name": "BaseBdev2", 00:13:25.137 "aliases": [ 00:13:25.137 "802e6753-82d8-47e0-bd00-f8cec152d8c9" 00:13:25.137 ], 00:13:25.137 "product_name": "Malloc disk", 00:13:25.137 "block_size": 512, 00:13:25.137 "num_blocks": 65536, 00:13:25.137 "uuid": "802e6753-82d8-47e0-bd00-f8cec152d8c9", 00:13:25.137 "assigned_rate_limits": { 00:13:25.137 "rw_ios_per_sec": 0, 00:13:25.137 "rw_mbytes_per_sec": 0, 00:13:25.137 "r_mbytes_per_sec": 0, 00:13:25.137 "w_mbytes_per_sec": 0 00:13:25.137 }, 00:13:25.137 "claimed": true, 00:13:25.137 "claim_type": "exclusive_write", 00:13:25.137 "zoned": false, 00:13:25.137 "supported_io_types": { 00:13:25.137 "read": true, 00:13:25.137 "write": true, 00:13:25.137 "unmap": true, 00:13:25.137 "write_zeroes": true, 00:13:25.137 "flush": true, 00:13:25.137 "reset": true, 00:13:25.137 "compare": false, 00:13:25.137 "compare_and_write": false, 00:13:25.137 "abort": true, 00:13:25.137 "nvme_admin": false, 00:13:25.137 
"nvme_io": false 00:13:25.137 }, 00:13:25.137 "memory_domains": [ 00:13:25.137 { 00:13:25.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.137 "dma_device_type": 2 00:13:25.137 } 00:13:25.137 ], 00:13:25.137 "driver_specific": {} 00:13:25.137 } 00:13:25.137 ] 00:13:25.137 07:59:30 -- common/autotest_common.sh@895 -- # return 0 00:13:25.137 07:59:30 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:25.137 07:59:30 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:25.137 07:59:30 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:25.137 07:59:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:25.137 07:59:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:25.137 07:59:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:25.137 07:59:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:25.137 07:59:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:25.137 07:59:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:25.137 07:59:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:25.137 07:59:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:25.137 07:59:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:25.137 07:59:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:25.137 07:59:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.395 07:59:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:25.396 "name": "Existed_Raid", 00:13:25.396 "uuid": "411c8b95-df4f-4d03-babb-f65325ca1899", 00:13:25.396 "strip_size_kb": 64, 00:13:25.396 "state": "configuring", 00:13:25.396 "raid_level": "raid0", 00:13:25.396 "superblock": true, 00:13:25.396 "num_base_bdevs": 4, 00:13:25.396 "num_base_bdevs_discovered": 2, 00:13:25.396 "num_base_bdevs_operational": 4, 00:13:25.396 "base_bdevs_list": [ 00:13:25.396 { 00:13:25.396 "name": "BaseBdev1", 00:13:25.396 "uuid": "7023eb04-1949-45bc-b8d9-dd3b418b9267", 00:13:25.396 "is_configured": true, 00:13:25.396 "data_offset": 2048, 00:13:25.396 "data_size": 63488 00:13:25.396 }, 00:13:25.396 { 00:13:25.396 "name": "BaseBdev2", 00:13:25.396 "uuid": "802e6753-82d8-47e0-bd00-f8cec152d8c9", 00:13:25.396 "is_configured": true, 00:13:25.396 "data_offset": 2048, 00:13:25.396 "data_size": 63488 00:13:25.396 }, 00:13:25.396 { 00:13:25.396 "name": "BaseBdev3", 00:13:25.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.396 "is_configured": false, 00:13:25.396 "data_offset": 0, 00:13:25.396 "data_size": 0 00:13:25.396 }, 00:13:25.396 { 00:13:25.396 "name": "BaseBdev4", 00:13:25.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.396 "is_configured": false, 00:13:25.396 "data_offset": 0, 00:13:25.396 "data_size": 0 00:13:25.396 } 00:13:25.396 ] 00:13:25.396 }' 00:13:25.396 07:59:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:25.396 07:59:31 -- common/autotest_common.sh@10 -- # set +x 00:13:25.962 07:59:31 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:26.221 BaseBdev3 00:13:26.221 [2024-07-13 07:59:31.801920] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:26.221 07:59:31 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:13:26.221 07:59:31 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:13:26.221 07:59:31 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:26.221 07:59:31 -- common/autotest_common.sh@889 -- # local i 00:13:26.221 07:59:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:26.221 07:59:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:26.221 07:59:31 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:26.221 07:59:31 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:26.480 [ 00:13:26.480 { 00:13:26.480 "name": "BaseBdev3", 00:13:26.480 "aliases": [ 00:13:26.480 "a1255b47-019d-47e8-ae0a-f15961896bdd" 00:13:26.480 ], 00:13:26.480 "product_name": "Malloc disk", 00:13:26.480 "block_size": 512, 00:13:26.480 "num_blocks": 65536, 00:13:26.480 "uuid": "a1255b47-019d-47e8-ae0a-f15961896bdd", 00:13:26.480 "assigned_rate_limits": { 00:13:26.480 "rw_ios_per_sec": 0, 00:13:26.480 "rw_mbytes_per_sec": 0, 00:13:26.480 "r_mbytes_per_sec": 0, 00:13:26.480 "w_mbytes_per_sec": 0 00:13:26.480 }, 00:13:26.480 "claimed": true, 00:13:26.480 "claim_type": "exclusive_write", 00:13:26.480 "zoned": false, 00:13:26.480 "supported_io_types": { 00:13:26.480 "read": true, 00:13:26.480 "write": true, 00:13:26.480 "unmap": true, 00:13:26.480 "write_zeroes": true, 00:13:26.480 "flush": true, 00:13:26.480 "reset": true, 00:13:26.480 "compare": false, 00:13:26.480 "compare_and_write": false, 00:13:26.480 "abort": true, 00:13:26.480 "nvme_admin": false, 00:13:26.480 "nvme_io": false 00:13:26.480 }, 00:13:26.480 "memory_domains": [ 00:13:26.480 { 00:13:26.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.480 "dma_device_type": 2 00:13:26.480 } 00:13:26.480 ], 00:13:26.480 "driver_specific": {} 00:13:26.480 } 00:13:26.480 ] 00:13:26.480 07:59:32 -- common/autotest_common.sh@895 -- # return 0 00:13:26.480 07:59:32 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:26.480 07:59:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:26.480 07:59:32 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:26.480 07:59:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:26.480 07:59:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:26.480 07:59:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:26.480 07:59:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:26.480 07:59:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:26.480 07:59:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:26.480 07:59:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:26.480 07:59:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:26.480 07:59:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:26.480 07:59:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.480 07:59:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:26.739 07:59:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:26.739 "name": "Existed_Raid", 00:13:26.739 "uuid": "411c8b95-df4f-4d03-babb-f65325ca1899", 00:13:26.739 "strip_size_kb": 64, 00:13:26.739 "state": "configuring", 00:13:26.739 "raid_level": "raid0", 00:13:26.739 "superblock": true, 00:13:26.739 "num_base_bdevs": 4, 00:13:26.739 "num_base_bdevs_discovered": 3, 00:13:26.739 "num_base_bdevs_operational": 4, 00:13:26.739 
"base_bdevs_list": [ 00:13:26.739 { 00:13:26.739 "name": "BaseBdev1", 00:13:26.739 "uuid": "7023eb04-1949-45bc-b8d9-dd3b418b9267", 00:13:26.739 "is_configured": true, 00:13:26.739 "data_offset": 2048, 00:13:26.739 "data_size": 63488 00:13:26.739 }, 00:13:26.739 { 00:13:26.739 "name": "BaseBdev2", 00:13:26.739 "uuid": "802e6753-82d8-47e0-bd00-f8cec152d8c9", 00:13:26.739 "is_configured": true, 00:13:26.739 "data_offset": 2048, 00:13:26.739 "data_size": 63488 00:13:26.739 }, 00:13:26.739 { 00:13:26.739 "name": "BaseBdev3", 00:13:26.739 "uuid": "a1255b47-019d-47e8-ae0a-f15961896bdd", 00:13:26.739 "is_configured": true, 00:13:26.739 "data_offset": 2048, 00:13:26.739 "data_size": 63488 00:13:26.739 }, 00:13:26.739 { 00:13:26.739 "name": "BaseBdev4", 00:13:26.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.739 "is_configured": false, 00:13:26.739 "data_offset": 0, 00:13:26.739 "data_size": 0 00:13:26.739 } 00:13:26.739 ] 00:13:26.739 }' 00:13:26.739 07:59:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:26.739 07:59:32 -- common/autotest_common.sh@10 -- # set +x 00:13:27.305 07:59:32 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:13:27.305 BaseBdev4 00:13:27.305 [2024-07-13 07:59:33.013684] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:27.305 [2024-07-13 07:59:33.013801] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000028280 00:13:27.305 [2024-07-13 07:59:33.013813] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:27.305 [2024-07-13 07:59:33.013893] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:13:27.305 [2024-07-13 07:59:33.014054] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000028280 00:13:27.305 [2024-07-13 07:59:33.014063] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000028280 00:13:27.305 [2024-07-13 07:59:33.014123] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.305 07:59:33 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:13:27.305 07:59:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:13:27.305 07:59:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:27.305 07:59:33 -- common/autotest_common.sh@889 -- # local i 00:13:27.305 07:59:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:27.305 07:59:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:27.305 07:59:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:27.563 07:59:33 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:27.563 [ 00:13:27.563 { 00:13:27.563 "name": "BaseBdev4", 00:13:27.563 "aliases": [ 00:13:27.563 "a1f89c64-53d1-41bf-9d8b-0a991ef161f0" 00:13:27.563 ], 00:13:27.563 "product_name": "Malloc disk", 00:13:27.563 "block_size": 512, 00:13:27.563 "num_blocks": 65536, 00:13:27.563 "uuid": "a1f89c64-53d1-41bf-9d8b-0a991ef161f0", 00:13:27.563 "assigned_rate_limits": { 00:13:27.563 "rw_ios_per_sec": 0, 00:13:27.563 "rw_mbytes_per_sec": 0, 00:13:27.563 "r_mbytes_per_sec": 0, 00:13:27.563 "w_mbytes_per_sec": 0 00:13:27.563 }, 00:13:27.563 "claimed": true, 00:13:27.563 "claim_type": 
"exclusive_write", 00:13:27.563 "zoned": false, 00:13:27.563 "supported_io_types": { 00:13:27.563 "read": true, 00:13:27.563 "write": true, 00:13:27.563 "unmap": true, 00:13:27.563 "write_zeroes": true, 00:13:27.563 "flush": true, 00:13:27.563 "reset": true, 00:13:27.563 "compare": false, 00:13:27.563 "compare_and_write": false, 00:13:27.563 "abort": true, 00:13:27.563 "nvme_admin": false, 00:13:27.563 "nvme_io": false 00:13:27.563 }, 00:13:27.563 "memory_domains": [ 00:13:27.563 { 00:13:27.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.563 "dma_device_type": 2 00:13:27.563 } 00:13:27.563 ], 00:13:27.563 "driver_specific": {} 00:13:27.563 } 00:13:27.563 ] 00:13:27.563 07:59:33 -- common/autotest_common.sh@895 -- # return 0 00:13:27.563 07:59:33 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:27.563 07:59:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:27.563 07:59:33 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:27.563 07:59:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:27.563 07:59:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:27.563 07:59:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:27.563 07:59:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:27.563 07:59:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:27.563 07:59:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:27.563 07:59:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:27.563 07:59:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:27.563 07:59:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:27.563 07:59:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.563 07:59:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:27.820 07:59:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:27.820 "name": "Existed_Raid", 00:13:27.820 "uuid": "411c8b95-df4f-4d03-babb-f65325ca1899", 00:13:27.820 "strip_size_kb": 64, 00:13:27.820 "state": "online", 00:13:27.820 "raid_level": "raid0", 00:13:27.820 "superblock": true, 00:13:27.820 "num_base_bdevs": 4, 00:13:27.820 "num_base_bdevs_discovered": 4, 00:13:27.820 "num_base_bdevs_operational": 4, 00:13:27.820 "base_bdevs_list": [ 00:13:27.820 { 00:13:27.820 "name": "BaseBdev1", 00:13:27.820 "uuid": "7023eb04-1949-45bc-b8d9-dd3b418b9267", 00:13:27.820 "is_configured": true, 00:13:27.820 "data_offset": 2048, 00:13:27.820 "data_size": 63488 00:13:27.820 }, 00:13:27.820 { 00:13:27.820 "name": "BaseBdev2", 00:13:27.820 "uuid": "802e6753-82d8-47e0-bd00-f8cec152d8c9", 00:13:27.820 "is_configured": true, 00:13:27.821 "data_offset": 2048, 00:13:27.821 "data_size": 63488 00:13:27.821 }, 00:13:27.821 { 00:13:27.821 "name": "BaseBdev3", 00:13:27.821 "uuid": "a1255b47-019d-47e8-ae0a-f15961896bdd", 00:13:27.821 "is_configured": true, 00:13:27.821 "data_offset": 2048, 00:13:27.821 "data_size": 63488 00:13:27.821 }, 00:13:27.821 { 00:13:27.821 "name": "BaseBdev4", 00:13:27.821 "uuid": "a1f89c64-53d1-41bf-9d8b-0a991ef161f0", 00:13:27.821 "is_configured": true, 00:13:27.821 "data_offset": 2048, 00:13:27.821 "data_size": 63488 00:13:27.821 } 00:13:27.821 ] 00:13:27.821 }' 00:13:27.821 07:59:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:27.821 07:59:33 -- common/autotest_common.sh@10 -- # set +x 00:13:28.386 07:59:34 -- bdev/bdev_raid.sh@262 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:28.645 [2024-07-13 07:59:34.325911] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:28.645 [2024-07-13 07:59:34.325937] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:28.645 [2024-07-13 07:59:34.325982] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:28.645 07:59:34 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:13:28.645 07:59:34 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:13:28.645 07:59:34 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:28.645 07:59:34 -- bdev/bdev_raid.sh@197 -- # return 1 00:13:28.645 07:59:34 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:13:28.645 07:59:34 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:13:28.645 07:59:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:28.645 07:59:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:13:28.645 07:59:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:28.645 07:59:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:28.645 07:59:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:28.645 07:59:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:28.645 07:59:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:28.645 07:59:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:28.645 07:59:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:28.645 07:59:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:28.645 07:59:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.904 07:59:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:28.904 "name": "Existed_Raid", 00:13:28.904 "uuid": "411c8b95-df4f-4d03-babb-f65325ca1899", 00:13:28.904 "strip_size_kb": 64, 00:13:28.904 "state": "offline", 00:13:28.904 "raid_level": "raid0", 00:13:28.904 "superblock": true, 00:13:28.904 "num_base_bdevs": 4, 00:13:28.904 "num_base_bdevs_discovered": 3, 00:13:28.904 "num_base_bdevs_operational": 3, 00:13:28.904 "base_bdevs_list": [ 00:13:28.904 { 00:13:28.904 "name": null, 00:13:28.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.904 "is_configured": false, 00:13:28.904 "data_offset": 2048, 00:13:28.904 "data_size": 63488 00:13:28.904 }, 00:13:28.904 { 00:13:28.904 "name": "BaseBdev2", 00:13:28.904 "uuid": "802e6753-82d8-47e0-bd00-f8cec152d8c9", 00:13:28.904 "is_configured": true, 00:13:28.904 "data_offset": 2048, 00:13:28.904 "data_size": 63488 00:13:28.904 }, 00:13:28.904 { 00:13:28.904 "name": "BaseBdev3", 00:13:28.904 "uuid": "a1255b47-019d-47e8-ae0a-f15961896bdd", 00:13:28.904 "is_configured": true, 00:13:28.904 "data_offset": 2048, 00:13:28.904 "data_size": 63488 00:13:28.904 }, 00:13:28.904 { 00:13:28.904 "name": "BaseBdev4", 00:13:28.904 "uuid": "a1f89c64-53d1-41bf-9d8b-0a991ef161f0", 00:13:28.904 "is_configured": true, 00:13:28.904 "data_offset": 2048, 00:13:28.904 "data_size": 63488 00:13:28.904 } 00:13:28.904 ] 00:13:28.904 }' 00:13:28.904 07:59:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:28.904 07:59:34 -- common/autotest_common.sh@10 -- # set +x 00:13:29.471 07:59:35 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:13:29.471 07:59:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:29.471 07:59:35 -- 
bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:29.471 07:59:35 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:29.729 07:59:35 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:29.729 07:59:35 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:29.729 07:59:35 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:29.988 [2024-07-13 07:59:35.619263] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:29.988 07:59:35 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:29.988 07:59:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:29.988 07:59:35 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:29.988 07:59:35 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:30.246 07:59:35 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:30.246 07:59:35 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:30.246 07:59:35 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:30.246 [2024-07-13 07:59:36.001738] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:30.246 07:59:36 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:30.246 07:59:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:30.246 07:59:36 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:30.246 07:59:36 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:30.505 07:59:36 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:30.505 07:59:36 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:30.505 07:59:36 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:13:30.763 [2024-07-13 07:59:36.418152] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:30.763 [2024-07-13 07:59:36.418193] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000028280 name Existed_Raid, state offline 00:13:30.763 07:59:36 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:30.763 07:59:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:30.763 07:59:36 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:30.763 07:59:36 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:13:31.021 07:59:36 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:13:31.021 07:59:36 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:13:31.021 07:59:36 -- bdev/bdev_raid.sh@287 -- # killprocess 63811 00:13:31.021 07:59:36 -- common/autotest_common.sh@926 -- # '[' -z 63811 ']' 00:13:31.021 07:59:36 -- common/autotest_common.sh@930 -- # kill -0 63811 00:13:31.021 07:59:36 -- common/autotest_common.sh@931 -- # uname 00:13:31.021 07:59:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:31.021 07:59:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63811 00:13:31.021 07:59:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:31.021 killing process with pid 63811 00:13:31.021 07:59:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:31.021 07:59:36 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 63811' 00:13:31.021 07:59:36 -- common/autotest_common.sh@945 -- # kill 63811 00:13:31.021 07:59:36 -- common/autotest_common.sh@950 -- # wait 63811 00:13:31.021 [2024-07-13 07:59:36.610835] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:31.021 [2024-07-13 07:59:36.610894] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:31.021 ************************************ 00:13:31.021 END TEST raid_state_function_test_sb 00:13:31.021 ************************************ 00:13:31.021 07:59:36 -- bdev/bdev_raid.sh@289 -- # return 0 00:13:31.021 00:13:31.021 real 0m11.823s 00:13:31.021 user 0m22.001s 00:13:31.021 sys 0m1.459s 00:13:31.021 07:59:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:31.021 07:59:36 -- common/autotest_common.sh@10 -- # set +x 00:13:31.280 07:59:36 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:13:31.280 07:59:36 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:13:31.280 07:59:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:31.280 07:59:36 -- common/autotest_common.sh@10 -- # set +x 00:13:31.280 ************************************ 00:13:31.280 START TEST raid_superblock_test 00:13:31.280 ************************************ 00:13:31.280 07:59:36 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 4 00:13:31.280 07:59:36 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:13:31.280 07:59:36 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:13:31.280 07:59:36 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:13:31.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:31.280 07:59:36 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:13:31.280 07:59:36 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:13:31.280 07:59:36 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:13:31.280 07:59:36 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:13:31.280 07:59:36 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:13:31.280 07:59:36 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:13:31.280 07:59:36 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:13:31.280 07:59:36 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:13:31.280 07:59:36 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:13:31.280 07:59:36 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:13:31.280 07:59:36 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:13:31.280 07:59:36 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:13:31.280 07:59:36 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:13:31.280 07:59:36 -- bdev/bdev_raid.sh@357 -- # raid_pid=64234 00:13:31.280 07:59:36 -- bdev/bdev_raid.sh@358 -- # waitforlisten 64234 /var/tmp/spdk-raid.sock 00:13:31.280 07:59:36 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:13:31.280 07:59:36 -- common/autotest_common.sh@819 -- # '[' -z 64234 ']' 00:13:31.280 07:59:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:31.280 07:59:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:31.280 07:59:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
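For reference, the bdev stack that raid_superblock_test assembles over the following steps can be reproduced by hand with a few RPCs. This is a minimal sketch, assuming a bdev_svc app is already listening on /var/tmp/spdk-raid.sock; the rpc shell variable is shorthand introduced here for brevity and is not part of the test script:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for i in 1 2 3 4; do
    # 32 MB malloc bdev (65536 blocks of 512 bytes), wrapped in a passthru bdev
    $rpc -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc$i
    $rpc -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
  done
  # raid0 over the four passthru bdevs, 64 kB strip size, with an on-disk superblock (-s)
  $rpc -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s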
00:13:31.280 07:59:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:31.280 07:59:36 -- common/autotest_common.sh@10 -- # set +x 00:13:31.280 [2024-07-13 07:59:36.995520] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:13:31.280 [2024-07-13 07:59:36.995690] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64234 ] 00:13:31.539 [2024-07-13 07:59:37.125357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.539 [2024-07-13 07:59:37.169429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.539 [2024-07-13 07:59:37.214262] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:32.106 07:59:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:32.106 07:59:37 -- common/autotest_common.sh@852 -- # return 0 00:13:32.106 07:59:37 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:13:32.106 07:59:37 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:32.106 07:59:37 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:13:32.106 07:59:37 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:13:32.106 07:59:37 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:32.106 07:59:37 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:32.106 07:59:37 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:13:32.106 07:59:37 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:32.106 07:59:37 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:13:32.363 malloc1 00:13:32.363 07:59:37 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:32.363 [2024-07-13 07:59:38.097807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:32.363 [2024-07-13 07:59:38.097879] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.363 [2024-07-13 07:59:38.097922] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000026180 00:13:32.363 [2024-07-13 07:59:38.097956] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.363 [2024-07-13 07:59:38.099606] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.363 [2024-07-13 07:59:38.099647] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:32.363 pt1 00:13:32.363 07:59:38 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:13:32.363 07:59:38 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:32.363 07:59:38 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:13:32.363 07:59:38 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:13:32.363 07:59:38 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:32.363 07:59:38 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:32.363 07:59:38 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:13:32.363 07:59:38 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:32.363 07:59:38 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:13:32.619 malloc2 00:13:32.619 07:59:38 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:32.619 [2024-07-13 07:59:38.378596] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:32.619 [2024-07-13 07:59:38.378652] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.619 [2024-07-13 07:59:38.378708] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027f80 00:13:32.619 [2024-07-13 07:59:38.378740] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.619 [2024-07-13 07:59:38.380195] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.619 [2024-07-13 07:59:38.380234] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:32.619 pt2 00:13:32.619 07:59:38 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:13:32.619 07:59:38 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:32.619 07:59:38 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:13:32.619 07:59:38 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:13:32.619 07:59:38 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:32.619 07:59:38 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:32.619 07:59:38 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:13:32.619 07:59:38 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:32.619 07:59:38 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:13:32.876 malloc3 00:13:32.876 07:59:38 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:32.876 [2024-07-13 07:59:38.667502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:32.876 [2024-07-13 07:59:38.667564] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.876 [2024-07-13 07:59:38.667623] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000029d80 00:13:32.876 [2024-07-13 07:59:38.667657] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.876 [2024-07-13 07:59:38.669097] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.876 [2024-07-13 07:59:38.669133] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:32.876 pt3 00:13:32.876 07:59:38 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:13:32.876 07:59:38 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:32.876 07:59:38 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:13:32.876 07:59:38 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:13:32.876 07:59:38 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:32.876 07:59:38 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:32.876 07:59:38 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:13:32.876 07:59:38 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:32.876 07:59:38 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:13:33.134 malloc4 00:13:33.134 07:59:38 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:33.391 [2024-07-13 07:59:39.008387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:33.391 [2024-07-13 07:59:39.008590] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.391 [2024-07-13 07:59:39.008641] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002bb80 00:13:33.391 [2024-07-13 07:59:39.008676] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.391 [2024-07-13 07:59:39.010199] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.391 [2024-07-13 07:59:39.010237] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:33.391 pt4 00:13:33.391 07:59:39 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:13:33.391 07:59:39 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:33.391 07:59:39 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:13:33.649 [2024-07-13 07:59:39.228514] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:33.649 [2024-07-13 07:59:39.230027] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:33.649 [2024-07-13 07:59:39.230062] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:33.649 [2024-07-13 07:59:39.230084] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:33.649 [2024-07-13 07:59:39.230177] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002d080 00:13:33.649 [2024-07-13 07:59:39.230187] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:33.649 [2024-07-13 07:59:39.230261] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:13:33.649 [2024-07-13 07:59:39.230437] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002d080 00:13:33.649 [2024-07-13 07:59:39.230446] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002d080 00:13:33.649 [2024-07-13 07:59:39.230526] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.649 07:59:39 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:33.649 07:59:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:33.649 07:59:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:33.649 07:59:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:33.649 07:59:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:33.649 07:59:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:33.649 07:59:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:33.650 07:59:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:33.650 07:59:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:33.650 07:59:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:33.650 07:59:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.650 07:59:39 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:33.650 07:59:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:33.650 "name": "raid_bdev1", 00:13:33.650 "uuid": "2c0b02a0-d53b-4382-a703-b57ee88256cc", 00:13:33.650 "strip_size_kb": 64, 00:13:33.650 "state": "online", 00:13:33.650 "raid_level": "raid0", 00:13:33.650 "superblock": true, 00:13:33.650 "num_base_bdevs": 4, 00:13:33.650 "num_base_bdevs_discovered": 4, 00:13:33.650 "num_base_bdevs_operational": 4, 00:13:33.650 "base_bdevs_list": [ 00:13:33.650 { 00:13:33.650 "name": "pt1", 00:13:33.650 "uuid": "69d30a86-a39f-5985-93aa-ad65dd2e62b0", 00:13:33.650 "is_configured": true, 00:13:33.650 "data_offset": 2048, 00:13:33.650 "data_size": 63488 00:13:33.650 }, 00:13:33.650 { 00:13:33.650 "name": "pt2", 00:13:33.650 "uuid": "ee67409e-2d3a-51f8-975b-a96d2bebfd23", 00:13:33.650 "is_configured": true, 00:13:33.650 "data_offset": 2048, 00:13:33.650 "data_size": 63488 00:13:33.650 }, 00:13:33.650 { 00:13:33.650 "name": "pt3", 00:13:33.650 "uuid": "e7dec53e-a828-51a4-8cdf-7a436ff24571", 00:13:33.650 "is_configured": true, 00:13:33.650 "data_offset": 2048, 00:13:33.650 "data_size": 63488 00:13:33.650 }, 00:13:33.650 { 00:13:33.650 "name": "pt4", 00:13:33.650 "uuid": "6a28f1cb-3d98-5136-a34c-97d8d180fbf6", 00:13:33.650 "is_configured": true, 00:13:33.650 "data_offset": 2048, 00:13:33.650 "data_size": 63488 00:13:33.650 } 00:13:33.650 ] 00:13:33.650 }' 00:13:33.650 07:59:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:33.650 07:59:39 -- common/autotest_common.sh@10 -- # set +x 00:13:34.216 07:59:39 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:34.216 07:59:39 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:13:34.473 [2024-07-13 07:59:40.092633] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:34.473 07:59:40 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=2c0b02a0-d53b-4382-a703-b57ee88256cc 00:13:34.474 07:59:40 -- bdev/bdev_raid.sh@380 -- # '[' -z 2c0b02a0-d53b-4382-a703-b57ee88256cc ']' 00:13:34.474 07:59:40 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:34.731 [2024-07-13 07:59:40.304534] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:34.731 [2024-07-13 07:59:40.304562] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:34.731 [2024-07-13 07:59:40.304628] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:34.731 [2024-07-13 07:59:40.304677] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:34.731 [2024-07-13 07:59:40.304686] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002d080 name raid_bdev1, state offline 00:13:34.731 07:59:40 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:34.731 07:59:40 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:13:34.731 07:59:40 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:13:34.731 07:59:40 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:13:34.731 07:59:40 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:13:34.731 07:59:40 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
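The teardown exercised next deletes the raid bdev and then each passthru bdev in turn. Because raid_bdev1 was created with -s, every underlying malloc bdev keeps the raid superblock on disk, which is exactly what the negative test that follows relies on. A rough sketch of the sequence, reusing the $rpc shorthand from above (an assumption, not a line from the script):

  $rpc -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
  for i in 1 2 3 4; do
    $rpc -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt$i
  done
  # recreating the array directly on the malloc bdevs is expected to fail with
  # JSON-RPC error -17 ("File exists"), since they still carry the old superblock
  $rpc -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1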
00:13:34.989 07:59:40 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:13:34.989 07:59:40 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:35.247 07:59:40 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:13:35.247 07:59:40 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:13:35.247 07:59:40 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:13:35.247 07:59:40 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:13:35.505 07:59:41 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:35.505 07:59:41 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:13:35.763 07:59:41 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:13:35.763 07:59:41 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:13:35.763 07:59:41 -- common/autotest_common.sh@640 -- # local es=0 00:13:35.763 07:59:41 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:13:35.763 07:59:41 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:35.763 07:59:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:35.763 07:59:41 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:35.763 07:59:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:35.763 07:59:41 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:35.763 07:59:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:35.763 07:59:41 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:35.763 07:59:41 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:35.763 07:59:41 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:13:35.763 [2024-07-13 07:59:41.476646] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:35.763 [2024-07-13 07:59:41.477974] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:35.763 [2024-07-13 07:59:41.478005] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:35.763 [2024-07-13 07:59:41.478022] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:35.763 [2024-07-13 07:59:41.478049] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:13:35.763 [2024-07-13 07:59:41.478109] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:13:35.763 [2024-07-13 07:59:41.478136] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:13:35.763 [2024-07-13 
07:59:41.478174] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:13:35.763 [2024-07-13 07:59:41.478194] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:35.763 [2024-07-13 07:59:41.478203] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002d680 name raid_bdev1, state configuring 00:13:35.763 request: 00:13:35.763 { 00:13:35.763 "name": "raid_bdev1", 00:13:35.763 "raid_level": "raid0", 00:13:35.763 "base_bdevs": [ 00:13:35.763 "malloc1", 00:13:35.763 "malloc2", 00:13:35.763 "malloc3", 00:13:35.763 "malloc4" 00:13:35.763 ], 00:13:35.763 "superblock": false, 00:13:35.763 "strip_size_kb": 64, 00:13:35.763 "method": "bdev_raid_create", 00:13:35.763 "req_id": 1 00:13:35.763 } 00:13:35.763 Got JSON-RPC error response 00:13:35.763 response: 00:13:35.763 { 00:13:35.763 "code": -17, 00:13:35.763 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:35.763 } 00:13:35.763 07:59:41 -- common/autotest_common.sh@643 -- # es=1 00:13:35.763 07:59:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:35.763 07:59:41 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:35.763 07:59:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:35.763 07:59:41 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:13:35.763 07:59:41 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:36.020 07:59:41 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:13:36.020 07:59:41 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:13:36.020 07:59:41 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:36.293 [2024-07-13 07:59:41.836653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:36.293 [2024-07-13 07:59:41.836719] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.293 [2024-07-13 07:59:41.836772] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002eb80 00:13:36.293 [2024-07-13 07:59:41.836796] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.293 [2024-07-13 07:59:41.838538] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.293 [2024-07-13 07:59:41.838595] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:36.293 [2024-07-13 07:59:41.838668] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:13:36.293 [2024-07-13 07:59:41.838705] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:36.293 pt1 00:13:36.293 07:59:41 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:13:36.293 07:59:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:36.293 07:59:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:36.293 07:59:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:36.293 07:59:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:36.293 07:59:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:36.293 07:59:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:36.293 07:59:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:36.293 07:59:41 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:13:36.293 07:59:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:36.293 07:59:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.293 07:59:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:36.293 07:59:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:36.293 "name": "raid_bdev1", 00:13:36.293 "uuid": "2c0b02a0-d53b-4382-a703-b57ee88256cc", 00:13:36.293 "strip_size_kb": 64, 00:13:36.293 "state": "configuring", 00:13:36.293 "raid_level": "raid0", 00:13:36.293 "superblock": true, 00:13:36.293 "num_base_bdevs": 4, 00:13:36.293 "num_base_bdevs_discovered": 1, 00:13:36.293 "num_base_bdevs_operational": 4, 00:13:36.293 "base_bdevs_list": [ 00:13:36.293 { 00:13:36.293 "name": "pt1", 00:13:36.293 "uuid": "69d30a86-a39f-5985-93aa-ad65dd2e62b0", 00:13:36.293 "is_configured": true, 00:13:36.293 "data_offset": 2048, 00:13:36.293 "data_size": 63488 00:13:36.293 }, 00:13:36.293 { 00:13:36.293 "name": null, 00:13:36.293 "uuid": "ee67409e-2d3a-51f8-975b-a96d2bebfd23", 00:13:36.293 "is_configured": false, 00:13:36.293 "data_offset": 2048, 00:13:36.293 "data_size": 63488 00:13:36.293 }, 00:13:36.293 { 00:13:36.293 "name": null, 00:13:36.293 "uuid": "e7dec53e-a828-51a4-8cdf-7a436ff24571", 00:13:36.293 "is_configured": false, 00:13:36.293 "data_offset": 2048, 00:13:36.293 "data_size": 63488 00:13:36.293 }, 00:13:36.293 { 00:13:36.293 "name": null, 00:13:36.293 "uuid": "6a28f1cb-3d98-5136-a34c-97d8d180fbf6", 00:13:36.293 "is_configured": false, 00:13:36.293 "data_offset": 2048, 00:13:36.293 "data_size": 63488 00:13:36.293 } 00:13:36.293 ] 00:13:36.293 }' 00:13:36.293 07:59:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:36.293 07:59:42 -- common/autotest_common.sh@10 -- # set +x 00:13:36.915 07:59:42 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:13:36.915 07:59:42 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:36.915 [2024-07-13 07:59:42.660765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:36.915 [2024-07-13 07:59:42.660836] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.915 [2024-07-13 07:59:42.660883] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000030980 00:13:36.915 [2024-07-13 07:59:42.660906] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.915 [2024-07-13 07:59:42.661170] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.915 [2024-07-13 07:59:42.661201] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:36.915 [2024-07-13 07:59:42.661255] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:13:36.915 [2024-07-13 07:59:42.661274] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:36.915 pt2 00:13:36.915 07:59:42 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:37.172 [2024-07-13 07:59:42.820794] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:37.172 07:59:42 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:13:37.172 07:59:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
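The bdev_passthru_create call above re-registers pt2 on top of malloc2. Since malloc2 still carries the raid superblock, the examine path is expected to claim pt2 straight back into raid_bdev1, as the output below confirms. At any point the array's state can be inspected with a query along these lines (a hedged example; the script itself uses the same RPC with a fuller jq filter):

  $rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'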
00:13:37.172 07:59:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:37.172 07:59:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:37.172 07:59:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:37.172 07:59:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:37.172 07:59:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:37.172 07:59:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:37.172 07:59:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:37.172 07:59:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:37.172 07:59:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.172 07:59:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:37.429 07:59:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:37.429 "name": "raid_bdev1", 00:13:37.429 "uuid": "2c0b02a0-d53b-4382-a703-b57ee88256cc", 00:13:37.429 "strip_size_kb": 64, 00:13:37.429 "state": "configuring", 00:13:37.429 "raid_level": "raid0", 00:13:37.429 "superblock": true, 00:13:37.429 "num_base_bdevs": 4, 00:13:37.429 "num_base_bdevs_discovered": 1, 00:13:37.429 "num_base_bdevs_operational": 4, 00:13:37.429 "base_bdevs_list": [ 00:13:37.429 { 00:13:37.429 "name": "pt1", 00:13:37.429 "uuid": "69d30a86-a39f-5985-93aa-ad65dd2e62b0", 00:13:37.429 "is_configured": true, 00:13:37.429 "data_offset": 2048, 00:13:37.429 "data_size": 63488 00:13:37.429 }, 00:13:37.429 { 00:13:37.429 "name": null, 00:13:37.429 "uuid": "ee67409e-2d3a-51f8-975b-a96d2bebfd23", 00:13:37.429 "is_configured": false, 00:13:37.429 "data_offset": 2048, 00:13:37.429 "data_size": 63488 00:13:37.429 }, 00:13:37.429 { 00:13:37.429 "name": null, 00:13:37.429 "uuid": "e7dec53e-a828-51a4-8cdf-7a436ff24571", 00:13:37.429 "is_configured": false, 00:13:37.429 "data_offset": 2048, 00:13:37.429 "data_size": 63488 00:13:37.429 }, 00:13:37.429 { 00:13:37.429 "name": null, 00:13:37.429 "uuid": "6a28f1cb-3d98-5136-a34c-97d8d180fbf6", 00:13:37.429 "is_configured": false, 00:13:37.429 "data_offset": 2048, 00:13:37.429 "data_size": 63488 00:13:37.429 } 00:13:37.429 ] 00:13:37.429 }' 00:13:37.429 07:59:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:37.429 07:59:43 -- common/autotest_common.sh@10 -- # set +x 00:13:37.997 07:59:43 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:13:37.997 07:59:43 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:13:37.997 07:59:43 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:37.997 [2024-07-13 07:59:43.768841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:37.997 [2024-07-13 07:59:43.768916] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.997 [2024-07-13 07:59:43.768956] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000031e80 00:13:37.997 [2024-07-13 07:59:43.768974] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.997 [2024-07-13 07:59:43.769232] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.997 [2024-07-13 07:59:43.769266] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:37.997 [2024-07-13 07:59:43.769315] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:13:37.997 [2024-07-13 07:59:43.769333] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:37.997 pt2 00:13:37.997 07:59:43 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:13:37.997 07:59:43 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:13:37.997 07:59:43 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:38.255 [2024-07-13 07:59:43.988900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:38.255 [2024-07-13 07:59:43.988984] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.255 [2024-07-13 07:59:43.989019] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000033380 00:13:38.255 [2024-07-13 07:59:43.989043] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.255 [2024-07-13 07:59:43.989286] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.255 [2024-07-13 07:59:43.989496] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:38.255 [2024-07-13 07:59:43.989558] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:13:38.255 [2024-07-13 07:59:43.989587] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:38.256 pt3 00:13:38.256 07:59:44 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:13:38.256 07:59:44 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:13:38.256 07:59:44 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:38.513 [2024-07-13 07:59:44.144897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:38.513 [2024-07-13 07:59:44.144955] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.513 [2024-07-13 07:59:44.144990] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000034880 00:13:38.513 [2024-07-13 07:59:44.145019] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.513 [2024-07-13 07:59:44.145253] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.513 [2024-07-13 07:59:44.145288] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:38.513 [2024-07-13 07:59:44.145330] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:13:38.513 [2024-07-13 07:59:44.145346] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:38.513 [2024-07-13 07:59:44.145408] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000030380 00:13:38.513 [2024-07-13 07:59:44.145417] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:38.513 [2024-07-13 07:59:44.145634] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:13:38.513 [2024-07-13 07:59:44.145814] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000030380 00:13:38.513 [2024-07-13 07:59:44.145825] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000030380 00:13:38.513 [2024-07-13 07:59:44.145879] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:13:38.513 pt4 00:13:38.513 07:59:44 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:13:38.513 07:59:44 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:13:38.513 07:59:44 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:38.513 07:59:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:38.513 07:59:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:38.513 07:59:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:38.513 07:59:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:38.513 07:59:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:38.513 07:59:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:38.513 07:59:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:38.513 07:59:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:38.513 07:59:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:38.513 07:59:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:38.513 07:59:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.771 07:59:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:38.771 "name": "raid_bdev1", 00:13:38.771 "uuid": "2c0b02a0-d53b-4382-a703-b57ee88256cc", 00:13:38.771 "strip_size_kb": 64, 00:13:38.771 "state": "online", 00:13:38.771 "raid_level": "raid0", 00:13:38.771 "superblock": true, 00:13:38.771 "num_base_bdevs": 4, 00:13:38.771 "num_base_bdevs_discovered": 4, 00:13:38.771 "num_base_bdevs_operational": 4, 00:13:38.771 "base_bdevs_list": [ 00:13:38.771 { 00:13:38.771 "name": "pt1", 00:13:38.771 "uuid": "69d30a86-a39f-5985-93aa-ad65dd2e62b0", 00:13:38.771 "is_configured": true, 00:13:38.771 "data_offset": 2048, 00:13:38.771 "data_size": 63488 00:13:38.771 }, 00:13:38.771 { 00:13:38.771 "name": "pt2", 00:13:38.771 "uuid": "ee67409e-2d3a-51f8-975b-a96d2bebfd23", 00:13:38.771 "is_configured": true, 00:13:38.771 "data_offset": 2048, 00:13:38.771 "data_size": 63488 00:13:38.771 }, 00:13:38.771 { 00:13:38.771 "name": "pt3", 00:13:38.771 "uuid": "e7dec53e-a828-51a4-8cdf-7a436ff24571", 00:13:38.771 "is_configured": true, 00:13:38.771 "data_offset": 2048, 00:13:38.771 "data_size": 63488 00:13:38.771 }, 00:13:38.771 { 00:13:38.771 "name": "pt4", 00:13:38.771 "uuid": "6a28f1cb-3d98-5136-a34c-97d8d180fbf6", 00:13:38.771 "is_configured": true, 00:13:38.771 "data_offset": 2048, 00:13:38.771 "data_size": 63488 00:13:38.771 } 00:13:38.771 ] 00:13:38.771 }' 00:13:38.771 07:59:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:38.771 07:59:44 -- common/autotest_common.sh@10 -- # set +x 00:13:39.337 07:59:44 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:39.337 07:59:44 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:13:39.595 [2024-07-13 07:59:45.173155] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:39.595 07:59:45 -- bdev/bdev_raid.sh@430 -- # '[' 2c0b02a0-d53b-4382-a703-b57ee88256cc '!=' 2c0b02a0-d53b-4382-a703-b57ee88256cc ']' 00:13:39.595 07:59:45 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:13:39.595 07:59:45 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:39.595 07:59:45 -- bdev/bdev_raid.sh@197 -- # return 1 00:13:39.595 07:59:45 -- bdev/bdev_raid.sh@511 -- # killprocess 64234 00:13:39.595 07:59:45 -- common/autotest_common.sh@926 -- # '[' -z 
64234 ']' 00:13:39.595 07:59:45 -- common/autotest_common.sh@930 -- # kill -0 64234 00:13:39.595 07:59:45 -- common/autotest_common.sh@931 -- # uname 00:13:39.595 07:59:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:39.595 07:59:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64234 00:13:39.595 killing process with pid 64234 00:13:39.595 07:59:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:39.595 07:59:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:39.595 07:59:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64234' 00:13:39.595 07:59:45 -- common/autotest_common.sh@945 -- # kill 64234 00:13:39.595 07:59:45 -- common/autotest_common.sh@950 -- # wait 64234 00:13:39.595 [2024-07-13 07:59:45.217471] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:39.595 [2024-07-13 07:59:45.217533] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:39.595 [2024-07-13 07:59:45.217574] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:39.595 [2024-07-13 07:59:45.217583] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000030380 name raid_bdev1, state offline 00:13:39.595 [2024-07-13 07:59:45.257421] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:39.853 07:59:45 -- bdev/bdev_raid.sh@513 -- # return 0 00:13:39.853 00:13:39.853 real 0m8.580s 00:13:39.853 user 0m15.546s 00:13:39.853 sys 0m1.120s 00:13:39.853 07:59:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:39.853 ************************************ 00:13:39.853 END TEST raid_superblock_test 00:13:39.853 ************************************ 00:13:39.853 07:59:45 -- common/autotest_common.sh@10 -- # set +x 00:13:39.853 07:59:45 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:13:39.853 07:59:45 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:13:39.853 07:59:45 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:39.853 07:59:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:39.853 07:59:45 -- common/autotest_common.sh@10 -- # set +x 00:13:39.853 ************************************ 00:13:39.853 START TEST raid_state_function_test 00:13:39.853 ************************************ 00:13:39.853 07:59:45 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 false 00:13:39.853 07:59:45 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:13:39.853 07:59:45 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:13:39.853 07:59:45 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:13:39.853 07:59:45 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:13:39.853 07:59:45 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:13:39.853 07:59:45 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:13:39.853 07:59:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:39.853 07:59:45 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:13:39.853 07:59:45 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:39.853 07:59:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:39.853 07:59:45 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:13:39.853 07:59:45 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:39.853 07:59:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:39.853 07:59:45 -- bdev/bdev_raid.sh@206 
00:13:39.853 07:59:45 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:13:39.853 07:59:45 -- bdev/bdev_raid.sh@208 -- # local strip_size
00:13:39.853 07:59:45 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:13:39.853 07:59:45 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:13:39.853 07:59:45 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']'
00:13:39.853 07:59:45 -- bdev/bdev_raid.sh@213 -- # strip_size=64
00:13:39.853 07:59:45 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:13:39.853 07:59:45 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']'
00:13:39.853 07:59:45 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg=
Process raid pid: 64527
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:13:39.853 07:59:45 -- bdev/bdev_raid.sh@226 -- # raid_pid=64527
00:13:39.853 07:59:45 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 64527'
00:13:39.853 07:59:45 -- bdev/bdev_raid.sh@228 -- # waitforlisten 64527 /var/tmp/spdk-raid.sock
00:13:39.853 07:59:45 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:13:39.853 07:59:45 -- common/autotest_common.sh@819 -- # '[' -z 64527 ']'
00:13:39.853 07:59:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:13:39.853 07:59:45 -- common/autotest_common.sh@824 -- # local max_retries=100
00:13:39.853 07:59:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:13:39.853 07:59:45 -- common/autotest_common.sh@828 -- # xtrace_disable
00:13:39.853 07:59:45 -- common/autotest_common.sh@10 -- # set +x
00:13:39.853 [2024-07-13 07:59:45.637330] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
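The harness above starts the bdev_svc stub app on a private RPC socket and then polls until that socket answers. A minimal sketch of the same launch-and-wait pattern, assuming the rpc.py and bdev_svc paths shown in this log (the retry loop stands in for waitforlisten and is illustrative, not its verbatim body):

    sock=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 -L bdev_raid &
    raid_pid=$!
    for ((retry = 0; retry < 100; retry++)); do
        # rpc_get_methods only succeeds once the app is listening on $sock
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.1
    done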
00:13:39.853 [2024-07-13 07:59:45.637498] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:40.112 [2024-07-13 07:59:45.772976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:40.112 [2024-07-13 07:59:45.822990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:40.112 [2024-07-13 07:59:45.873027] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:40.678 07:59:46 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:13:40.678 07:59:46 -- common/autotest_common.sh@852 -- # return 0
00:13:40.678 07:59:46 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:13:40.937 [2024-07-13 07:59:46.634059] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:13:40.937 [2024-07-13 07:59:46.634124] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:13:40.937 [2024-07-13 07:59:46.634134] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:40.937 [2024-07-13 07:59:46.634154] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:40.937 [2024-07-13 07:59:46.634161] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:13:40.937 [2024-07-13 07:59:46.634194] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:13:40.937 [2024-07-13 07:59:46.634201] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:13:40.937 [2024-07-13 07:59:46.634221] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:13:40.937 07:59:46 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:13:40.937 07:59:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:13:40.937 07:59:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:13:40.937 07:59:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:13:40.937 07:59:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:40.937 07:59:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:13:40.937 07:59:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:40.937 07:59:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:40.937 07:59:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:40.937 07:59:46 -- bdev/bdev_raid.sh@125 -- # local tmp
00:13:40.937 07:59:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:40.937 07:59:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
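verify_raid_bdev_state reads the raid's view of the world by piping bdev_raid_get_bdevs through the jq filter traced above; the capture that follows shows state "configuring" with zero base bdevs discovered, since none of the four named bdevs exists yet. The same check as a standalone sketch (field names as in the JSON below):

    sock=/var/tmp/spdk-raid.sock
    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid")')
    [ "$(jq -r '.state' <<< "$info")" = configuring ]
    [ "$(jq -r '.num_base_bdevs_discovered' <<< "$info")" -eq 0 ]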
"name": "BaseBdev1", 00:13:41.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.195 "is_configured": false, 00:13:41.195 "data_offset": 0, 00:13:41.195 "data_size": 0 00:13:41.195 }, 00:13:41.195 { 00:13:41.195 "name": "BaseBdev2", 00:13:41.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.195 "is_configured": false, 00:13:41.195 "data_offset": 0, 00:13:41.195 "data_size": 0 00:13:41.195 }, 00:13:41.195 { 00:13:41.195 "name": "BaseBdev3", 00:13:41.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.195 "is_configured": false, 00:13:41.195 "data_offset": 0, 00:13:41.195 "data_size": 0 00:13:41.195 }, 00:13:41.195 { 00:13:41.195 "name": "BaseBdev4", 00:13:41.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.195 "is_configured": false, 00:13:41.195 "data_offset": 0, 00:13:41.195 "data_size": 0 00:13:41.195 } 00:13:41.195 ] 00:13:41.195 }' 00:13:41.195 07:59:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:41.195 07:59:46 -- common/autotest_common.sh@10 -- # set +x 00:13:41.762 07:59:47 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:41.762 [2024-07-13 07:59:47.570103] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:41.762 [2024-07-13 07:59:47.570140] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000025880 name Existed_Raid, state configuring 00:13:42.021 07:59:47 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:42.021 [2024-07-13 07:59:47.722148] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:42.021 [2024-07-13 07:59:47.722201] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:42.021 [2024-07-13 07:59:47.722211] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:42.021 [2024-07-13 07:59:47.722232] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:42.021 [2024-07-13 07:59:47.722240] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:42.021 [2024-07-13 07:59:47.722262] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:42.021 [2024-07-13 07:59:47.722270] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:42.021 [2024-07-13 07:59:47.722290] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:42.021 07:59:47 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:42.280 BaseBdev1 00:13:42.280 [2024-07-13 07:59:47.877594] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:42.280 07:59:47 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:13:42.280 07:59:47 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:13:42.280 07:59:47 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:42.280 07:59:47 -- common/autotest_common.sh@889 -- # local i 00:13:42.280 07:59:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:42.280 07:59:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:42.280 07:59:47 -- common/autotest_common.sh@892 -- # 
00:13:42.280 07:59:47 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:13:42.280 07:59:47 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1
00:13:42.280 07:59:47 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:13:42.280 07:59:47 -- common/autotest_common.sh@889 -- # local i
00:13:42.280 07:59:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:13:42.280 07:59:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:13:42.280 07:59:47 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:13:42.539 07:59:48 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:13:42.539 [
00:13:42.539 {
00:13:42.539 "name": "BaseBdev1",
00:13:42.539 "aliases": [
00:13:42.539 "53391896-ba34-46c8-ad6f-6649d831db99"
00:13:42.539 ],
00:13:42.539 "product_name": "Malloc disk",
00:13:42.539 "block_size": 512,
00:13:42.539 "num_blocks": 65536,
00:13:42.539 "uuid": "53391896-ba34-46c8-ad6f-6649d831db99",
00:13:42.539 "assigned_rate_limits": {
00:13:42.539 "rw_ios_per_sec": 0,
00:13:42.539 "rw_mbytes_per_sec": 0,
00:13:42.539 "r_mbytes_per_sec": 0,
00:13:42.539 "w_mbytes_per_sec": 0
00:13:42.539 },
00:13:42.539 "claimed": true,
00:13:42.539 "claim_type": "exclusive_write",
00:13:42.539 "zoned": false,
00:13:42.539 "supported_io_types": {
00:13:42.539 "read": true,
00:13:42.539 "write": true,
00:13:42.539 "unmap": true,
00:13:42.539 "write_zeroes": true,
00:13:42.539 "flush": true,
00:13:42.539 "reset": true,
00:13:42.539 "compare": false,
00:13:42.539 "compare_and_write": false,
00:13:42.539 "abort": true,
00:13:42.539 "nvme_admin": false,
00:13:42.539 "nvme_io": false
00:13:42.539 },
00:13:42.539 "memory_domains": [
00:13:42.539 {
00:13:42.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:42.539 "dma_device_type": 2
00:13:42.539 }
00:13:42.539 ],
00:13:42.539 "driver_specific": {}
00:13:42.539 }
00:13:42.539 ]
00:13:42.539 07:59:48 -- common/autotest_common.sh@895 -- # return 0
00:13:42.539 07:59:48 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:13:42.539 07:59:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:13:42.539 07:59:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:13:42.539 07:59:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:13:42.539 07:59:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:42.539 07:59:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:13:42.539 07:59:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:42.539 07:59:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:42.539 07:59:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:42.539 07:59:48 -- bdev/bdev_raid.sh@125 -- # local tmp
00:13:42.539 07:59:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:42.539 07:59:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
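The descriptor dumped above records the claim the raid module took on the new base bdev: "claimed": true with claim_type "exclusive_write". A spot check of just those two fields, reusing the same socket (sketch):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_get_bdevs -b BaseBdev1 | jq -r '.[0] | "\(.claimed) \(.claim_type)"'
    # -> true exclusive_write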
00:13:42.799 07:59:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:42.799 "name": "Existed_Raid",
00:13:42.799 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:42.799 "strip_size_kb": 64,
00:13:42.799 "state": "configuring",
00:13:42.799 "raid_level": "concat",
00:13:42.799 "superblock": false,
00:13:42.799 "num_base_bdevs": 4,
00:13:42.799 "num_base_bdevs_discovered": 1,
00:13:42.799 "num_base_bdevs_operational": 4,
00:13:42.799 "base_bdevs_list": [
00:13:42.799 {
00:13:42.799 "name": "BaseBdev1",
00:13:42.799 "uuid": "53391896-ba34-46c8-ad6f-6649d831db99",
00:13:42.799 "is_configured": true,
00:13:42.799 "data_offset": 0,
00:13:42.799 "data_size": 65536
00:13:42.799 },
00:13:42.799 {
00:13:42.799 "name": "BaseBdev2",
00:13:42.799 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:42.799 "is_configured": false,
00:13:42.799 "data_offset": 0,
00:13:42.799 "data_size": 0
00:13:42.799 },
00:13:42.799 {
00:13:42.799 "name": "BaseBdev3",
00:13:42.799 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:42.799 "is_configured": false,
00:13:42.799 "data_offset": 0,
00:13:42.799 "data_size": 0
00:13:42.799 },
00:13:42.799 {
00:13:42.799 "name": "BaseBdev4",
00:13:42.799 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:42.799 "is_configured": false,
00:13:42.799 "data_offset": 0,
00:13:42.799 "data_size": 0
00:13:42.799 }
00:13:42.799 ]
00:13:42.799 }'
00:13:42.799 07:59:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:42.799 07:59:48 -- common/autotest_common.sh@10 -- # set +x
00:13:43.364 07:59:48 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:13:43.364 [2024-07-13 07:59:49.089747] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:13:43.364 [2024-07-13 07:59:49.089786] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring
00:13:43.364 07:59:49 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']'
00:13:43.364 07:59:49 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:13:43.623 [2024-07-13 07:59:49.229821] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:43.623 [2024-07-13 07:59:49.231182] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:43.623 [2024-07-13 07:59:49.231251] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:43.623 [2024-07-13 07:59:49.231261] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:13:43.623 [2024-07-13 07:59:49.231284] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:13:43.623 [2024-07-13 07:59:49.231292] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:13:43.623 [2024-07-13 07:59:49.231311] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
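Deleting the half-configured raid leaves its base bdevs in place, so the re-create traced above claims the already-existing BaseBdev1 immediately and keeps waiting for the other three. That delete/re-create cycle in isolation (sketch):

    sock=/var/tmp/spdk-raid.sock
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" "$@"; }
    rpc bdev_raid_delete Existed_Raid
    rpc bdev_raid_create -z 64 -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid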
"uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.881 "strip_size_kb": 64, 00:13:43.881 "state": "configuring", 00:13:43.881 "raid_level": "concat", 00:13:43.881 "superblock": false, 00:13:43.881 "num_base_bdevs": 4, 00:13:43.881 "num_base_bdevs_discovered": 1, 00:13:43.881 "num_base_bdevs_operational": 4, 00:13:43.881 "base_bdevs_list": [ 00:13:43.881 { 00:13:43.881 "name": "BaseBdev1", 00:13:43.881 "uuid": "53391896-ba34-46c8-ad6f-6649d831db99", 00:13:43.881 "is_configured": true, 00:13:43.881 "data_offset": 0, 00:13:43.881 "data_size": 65536 00:13:43.881 }, 00:13:43.881 { 00:13:43.881 "name": "BaseBdev2", 00:13:43.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.881 "is_configured": false, 00:13:43.881 "data_offset": 0, 00:13:43.881 "data_size": 0 00:13:43.881 }, 00:13:43.881 { 00:13:43.881 "name": "BaseBdev3", 00:13:43.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.881 "is_configured": false, 00:13:43.881 "data_offset": 0, 00:13:43.881 "data_size": 0 00:13:43.881 }, 00:13:43.881 { 00:13:43.881 "name": "BaseBdev4", 00:13:43.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.881 "is_configured": false, 00:13:43.881 "data_offset": 0, 00:13:43.881 "data_size": 0 00:13:43.881 } 00:13:43.881 ] 00:13:43.881 }' 00:13:43.881 07:59:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:43.881 07:59:49 -- common/autotest_common.sh@10 -- # set +x 00:13:44.447 07:59:49 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:44.447 [2024-07-13 07:59:50.101406] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:44.447 BaseBdev2 00:13:44.447 07:59:50 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:13:44.447 07:59:50 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:13:44.447 07:59:50 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:44.447 07:59:50 -- common/autotest_common.sh@889 -- # local i 00:13:44.447 07:59:50 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:44.447 07:59:50 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:44.447 07:59:50 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:44.705 07:59:50 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:44.705 [ 00:13:44.705 { 00:13:44.705 "name": "BaseBdev2", 00:13:44.705 "aliases": [ 00:13:44.705 "9b89aa0d-3205-49ba-a361-f6b2a47d7fd1" 00:13:44.705 ], 00:13:44.705 "product_name": "Malloc disk", 00:13:44.705 "block_size": 512, 00:13:44.705 "num_blocks": 65536, 00:13:44.705 "uuid": "9b89aa0d-3205-49ba-a361-f6b2a47d7fd1", 00:13:44.705 "assigned_rate_limits": { 00:13:44.705 "rw_ios_per_sec": 0, 00:13:44.705 "rw_mbytes_per_sec": 0, 00:13:44.705 "r_mbytes_per_sec": 0, 00:13:44.705 "w_mbytes_per_sec": 0 00:13:44.705 }, 00:13:44.705 "claimed": true, 00:13:44.705 "claim_type": "exclusive_write", 00:13:44.705 "zoned": false, 00:13:44.705 "supported_io_types": { 00:13:44.705 "read": true, 00:13:44.705 "write": true, 00:13:44.705 "unmap": true, 00:13:44.705 "write_zeroes": true, 00:13:44.705 "flush": true, 00:13:44.705 "reset": true, 00:13:44.705 "compare": false, 00:13:44.705 "compare_and_write": false, 00:13:44.705 "abort": true, 00:13:44.705 "nvme_admin": false, 00:13:44.705 "nvme_io": false 00:13:44.705 }, 00:13:44.705 "memory_domains": [ 
00:13:44.705 {
00:13:44.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:44.705 "dma_device_type": 2
00:13:44.705 }
00:13:44.705 ],
00:13:44.705 "driver_specific": {}
00:13:44.705 }
00:13:44.705 ]
00:13:44.705 07:59:50 -- common/autotest_common.sh@895 -- # return 0
00:13:44.705 07:59:50 -- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:13:44.705 07:59:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:13:44.705 07:59:50 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:13:44.705 07:59:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:13:44.705 07:59:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:13:44.705 07:59:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:13:44.705 07:59:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:44.705 07:59:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:13:44.705 07:59:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:44.705 07:59:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:44.705 07:59:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:44.705 07:59:50 -- bdev/bdev_raid.sh@125 -- # local tmp
00:13:44.705 07:59:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:44.705 07:59:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:44.964 07:59:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:44.964 "name": "Existed_Raid",
00:13:44.964 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:44.964 "strip_size_kb": 64,
00:13:44.964 "state": "configuring",
00:13:44.964 "raid_level": "concat",
00:13:44.964 "superblock": false,
00:13:44.964 "num_base_bdevs": 4,
00:13:44.964 "num_base_bdevs_discovered": 2,
00:13:44.964 "num_base_bdevs_operational": 4,
00:13:44.964 "base_bdevs_list": [
00:13:44.964 {
00:13:44.964 "name": "BaseBdev1",
00:13:44.964 "uuid": "53391896-ba34-46c8-ad6f-6649d831db99",
00:13:44.964 "is_configured": true,
00:13:44.964 "data_offset": 0,
00:13:44.964 "data_size": 65536
00:13:44.964 },
00:13:44.964 {
00:13:44.964 "name": "BaseBdev2",
00:13:44.964 "uuid": "9b89aa0d-3205-49ba-a361-f6b2a47d7fd1",
00:13:44.964 "is_configured": true,
00:13:44.964 "data_offset": 0,
00:13:44.964 "data_size": 65536
00:13:44.964 },
00:13:44.964 {
00:13:44.964 "name": "BaseBdev3",
00:13:44.964 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:44.964 "is_configured": false,
00:13:44.964 "data_offset": 0,
00:13:44.964 "data_size": 0
00:13:44.964 },
00:13:44.964 {
00:13:44.964 "name": "BaseBdev4",
00:13:44.964 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:44.964 "is_configured": false,
00:13:44.964 "data_offset": 0,
00:13:44.964 "data_size": 0
00:13:44.964 }
00:13:44.964 ]
00:13:44.964 }'
00:13:44.964 07:59:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:44.964 07:59:50 -- common/autotest_common.sh@10 -- # set +x
00:13:45.532 07:59:51 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:13:45.791 [2024-07-13 07:59:51.453145] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:13:45.791 BaseBdev3
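From here the harness repeats the same step per base bdev: create one malloc bdev, then re-check that num_base_bdevs_discovered grew by one (the array goes online on the fourth, as the captures further below show). A compressed form of that loop (sketch; the jq line stands in for the verify_raid_bdev_state function traced above):

    sock=/var/tmp/spdk-raid.sock
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" "$@"; }
    for ((i = 2; i <= 4; i++)); do
        rpc bdev_malloc_create 32 512 -b "BaseBdev$i"
        rpc bdev_raid_get_bdevs all | jq -e --argjson n "$i" \
            '.[] | select(.name == "Existed_Raid") | .num_base_bdevs_discovered == $n' > /dev/null
    done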
00:13:45.791 07:59:51 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:13:45.791 07:59:51 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3
00:13:45.791 07:59:51 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:13:45.791 07:59:51 -- common/autotest_common.sh@889 -- # local i
00:13:45.791 07:59:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:13:45.791 07:59:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:13:45.791 07:59:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:13:46.050 07:59:51 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:13:46.050 [
00:13:46.050 {
00:13:46.050 "name": "BaseBdev3",
00:13:46.050 "aliases": [
00:13:46.050 "2778c581-9290-46b5-ad90-3710f6dc32f6"
00:13:46.050 ],
00:13:46.050 "product_name": "Malloc disk",
00:13:46.050 "block_size": 512,
00:13:46.050 "num_blocks": 65536,
00:13:46.050 "uuid": "2778c581-9290-46b5-ad90-3710f6dc32f6",
00:13:46.050 "assigned_rate_limits": {
00:13:46.050 "rw_ios_per_sec": 0,
00:13:46.050 "rw_mbytes_per_sec": 0,
00:13:46.050 "r_mbytes_per_sec": 0,
00:13:46.050 "w_mbytes_per_sec": 0
00:13:46.050 },
00:13:46.050 "claimed": true,
00:13:46.050 "claim_type": "exclusive_write",
00:13:46.050 "zoned": false,
00:13:46.050 "supported_io_types": {
00:13:46.050 "read": true,
00:13:46.050 "write": true,
00:13:46.050 "unmap": true,
00:13:46.050 "write_zeroes": true,
00:13:46.050 "flush": true,
00:13:46.050 "reset": true,
00:13:46.050 "compare": false,
00:13:46.050 "compare_and_write": false,
00:13:46.050 "abort": true,
00:13:46.050 "nvme_admin": false,
00:13:46.050 "nvme_io": false
00:13:46.050 },
00:13:46.050 "memory_domains": [
00:13:46.050 {
00:13:46.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:46.050 "dma_device_type": 2
00:13:46.050 }
00:13:46.050 ],
00:13:46.050 "driver_specific": {}
00:13:46.050 }
00:13:46.050 ]
00:13:46.050 07:59:51 -- common/autotest_common.sh@895 -- # return 0
00:13:46.050 07:59:51 -- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:13:46.050 07:59:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:13:46.050 07:59:51 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:13:46.050 07:59:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:13:46.050 07:59:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:13:46.050 07:59:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:13:46.050 07:59:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:46.050 07:59:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:13:46.050 07:59:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:46.050 07:59:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:46.050 07:59:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:46.050 07:59:51 -- bdev/bdev_raid.sh@125 -- # local tmp
00:13:46.050 07:59:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:46.050 07:59:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:46.309 07:59:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:46.309 "name": "Existed_Raid",
00:13:46.309 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:46.309 "strip_size_kb": 64,
00:13:46.309 "state": "configuring",
00:13:46.309 "raid_level": "concat",
00:13:46.309 "superblock": false,
00:13:46.309 "num_base_bdevs": 4,
00:13:46.309 "num_base_bdevs_discovered": 3,
00:13:46.309 "num_base_bdevs_operational": 4,
00:13:46.309 "base_bdevs_list": [
00:13:46.309 {
"BaseBdev1", 00:13:46.309 "uuid": "53391896-ba34-46c8-ad6f-6649d831db99", 00:13:46.309 "is_configured": true, 00:13:46.309 "data_offset": 0, 00:13:46.309 "data_size": 65536 00:13:46.309 }, 00:13:46.309 { 00:13:46.309 "name": "BaseBdev2", 00:13:46.309 "uuid": "9b89aa0d-3205-49ba-a361-f6b2a47d7fd1", 00:13:46.309 "is_configured": true, 00:13:46.309 "data_offset": 0, 00:13:46.309 "data_size": 65536 00:13:46.309 }, 00:13:46.309 { 00:13:46.309 "name": "BaseBdev3", 00:13:46.309 "uuid": "2778c581-9290-46b5-ad90-3710f6dc32f6", 00:13:46.309 "is_configured": true, 00:13:46.309 "data_offset": 0, 00:13:46.309 "data_size": 65536 00:13:46.309 }, 00:13:46.309 { 00:13:46.309 "name": "BaseBdev4", 00:13:46.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.309 "is_configured": false, 00:13:46.309 "data_offset": 0, 00:13:46.309 "data_size": 0 00:13:46.309 } 00:13:46.309 ] 00:13:46.309 }' 00:13:46.309 07:59:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:46.309 07:59:51 -- common/autotest_common.sh@10 -- # set +x 00:13:46.877 07:59:52 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:13:47.136 [2024-07-13 07:59:52.772845] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:47.136 [2024-07-13 07:59:52.772883] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000027c80 00:13:47.136 [2024-07-13 07:59:52.772892] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:47.136 [2024-07-13 07:59:52.772984] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:13:47.136 [2024-07-13 07:59:52.773166] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000027c80 00:13:47.136 [2024-07-13 07:59:52.773175] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000027c80 00:13:47.136 [2024-07-13 07:59:52.773288] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.136 BaseBdev4 00:13:47.136 07:59:52 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:13:47.136 07:59:52 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:13:47.136 07:59:52 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:47.136 07:59:52 -- common/autotest_common.sh@889 -- # local i 00:13:47.136 07:59:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:47.136 07:59:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:47.136 07:59:52 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:47.136 07:59:52 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:47.394 [ 00:13:47.394 { 00:13:47.394 "name": "BaseBdev4", 00:13:47.394 "aliases": [ 00:13:47.394 "d2e696ea-33c1-4dfa-b563-6ad947d9f1d3" 00:13:47.394 ], 00:13:47.394 "product_name": "Malloc disk", 00:13:47.394 "block_size": 512, 00:13:47.394 "num_blocks": 65536, 00:13:47.394 "uuid": "d2e696ea-33c1-4dfa-b563-6ad947d9f1d3", 00:13:47.394 "assigned_rate_limits": { 00:13:47.394 "rw_ios_per_sec": 0, 00:13:47.394 "rw_mbytes_per_sec": 0, 00:13:47.394 "r_mbytes_per_sec": 0, 00:13:47.394 "w_mbytes_per_sec": 0 00:13:47.394 }, 00:13:47.394 "claimed": true, 00:13:47.394 "claim_type": "exclusive_write", 00:13:47.394 "zoned": false, 00:13:47.394 
"supported_io_types": { 00:13:47.394 "read": true, 00:13:47.394 "write": true, 00:13:47.394 "unmap": true, 00:13:47.394 "write_zeroes": true, 00:13:47.394 "flush": true, 00:13:47.394 "reset": true, 00:13:47.394 "compare": false, 00:13:47.394 "compare_and_write": false, 00:13:47.394 "abort": true, 00:13:47.394 "nvme_admin": false, 00:13:47.394 "nvme_io": false 00:13:47.394 }, 00:13:47.394 "memory_domains": [ 00:13:47.394 { 00:13:47.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.394 "dma_device_type": 2 00:13:47.394 } 00:13:47.394 ], 00:13:47.394 "driver_specific": {} 00:13:47.394 } 00:13:47.394 ] 00:13:47.394 07:59:53 -- common/autotest_common.sh@895 -- # return 0 00:13:47.394 07:59:53 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:47.394 07:59:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:47.394 07:59:53 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:13:47.394 07:59:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:47.394 07:59:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:47.394 07:59:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:47.394 07:59:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:47.394 07:59:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:47.394 07:59:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:47.394 07:59:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:47.394 07:59:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:47.394 07:59:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:47.394 07:59:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.394 07:59:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:47.652 07:59:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:47.652 "name": "Existed_Raid", 00:13:47.652 "uuid": "74581add-eb8e-44d1-a57a-4913ea30d2e5", 00:13:47.652 "strip_size_kb": 64, 00:13:47.652 "state": "online", 00:13:47.652 "raid_level": "concat", 00:13:47.652 "superblock": false, 00:13:47.652 "num_base_bdevs": 4, 00:13:47.652 "num_base_bdevs_discovered": 4, 00:13:47.652 "num_base_bdevs_operational": 4, 00:13:47.652 "base_bdevs_list": [ 00:13:47.652 { 00:13:47.652 "name": "BaseBdev1", 00:13:47.652 "uuid": "53391896-ba34-46c8-ad6f-6649d831db99", 00:13:47.652 "is_configured": true, 00:13:47.652 "data_offset": 0, 00:13:47.652 "data_size": 65536 00:13:47.652 }, 00:13:47.652 { 00:13:47.652 "name": "BaseBdev2", 00:13:47.652 "uuid": "9b89aa0d-3205-49ba-a361-f6b2a47d7fd1", 00:13:47.652 "is_configured": true, 00:13:47.652 "data_offset": 0, 00:13:47.652 "data_size": 65536 00:13:47.652 }, 00:13:47.652 { 00:13:47.652 "name": "BaseBdev3", 00:13:47.652 "uuid": "2778c581-9290-46b5-ad90-3710f6dc32f6", 00:13:47.652 "is_configured": true, 00:13:47.652 "data_offset": 0, 00:13:47.652 "data_size": 65536 00:13:47.652 }, 00:13:47.652 { 00:13:47.652 "name": "BaseBdev4", 00:13:47.652 "uuid": "d2e696ea-33c1-4dfa-b563-6ad947d9f1d3", 00:13:47.652 "is_configured": true, 00:13:47.652 "data_offset": 0, 00:13:47.652 "data_size": 65536 00:13:47.652 } 00:13:47.652 ] 00:13:47.652 }' 00:13:47.652 07:59:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:47.652 07:59:53 -- common/autotest_common.sh@10 -- # set +x 00:13:48.241 07:59:53 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:13:48.500 [2024-07-13 07:59:54.061101] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:13:48.500 [2024-07-13 07:59:54.061128] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:48.500 [2024-07-13 07:59:54.061170] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:48.500 07:59:54 -- bdev/bdev_raid.sh@263 -- # local expected_state
00:13:48.500 07:59:54 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat
00:13:48.500 07:59:54 -- bdev/bdev_raid.sh@195 -- # case $1 in
00:13:48.500 07:59:54 -- bdev/bdev_raid.sh@197 -- # return 1
00:13:48.501 07:59:54 -- bdev/bdev_raid.sh@265 -- # expected_state=offline
00:13:48.501 07:59:54 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3
00:13:48.501 07:59:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:13:48.501 07:59:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline
00:13:48.501 07:59:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:13:48.501 07:59:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:48.501 07:59:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:13:48.501 07:59:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:48.501 07:59:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:48.501 07:59:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:48.501 07:59:54 -- bdev/bdev_raid.sh@125 -- # local tmp
00:13:48.501 07:59:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:48.501 07:59:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:48.501 07:59:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:48.501 "name": "Existed_Raid",
00:13:48.501 "uuid": "74581add-eb8e-44d1-a57a-4913ea30d2e5",
00:13:48.501 "strip_size_kb": 64,
00:13:48.501 "state": "offline",
00:13:48.501 "raid_level": "concat",
00:13:48.501 "superblock": false,
00:13:48.501 "num_base_bdevs": 4,
00:13:48.501 "num_base_bdevs_discovered": 3,
00:13:48.501 "num_base_bdevs_operational": 3,
00:13:48.501 "base_bdevs_list": [
00:13:48.501 {
00:13:48.501 "name": null,
00:13:48.501 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:48.501 "is_configured": false,
00:13:48.501 "data_offset": 0,
00:13:48.501 "data_size": 65536
00:13:48.501 },
00:13:48.501 {
00:13:48.501 "name": "BaseBdev2",
00:13:48.501 "uuid": "9b89aa0d-3205-49ba-a361-f6b2a47d7fd1",
00:13:48.501 "is_configured": true,
00:13:48.501 "data_offset": 0,
00:13:48.501 "data_size": 65536
00:13:48.501 },
00:13:48.501 {
00:13:48.501 "name": "BaseBdev3",
00:13:48.501 "uuid": "2778c581-9290-46b5-ad90-3710f6dc32f6",
00:13:48.501 "is_configured": true,
00:13:48.501 "data_offset": 0,
00:13:48.501 "data_size": 65536
00:13:48.501 },
00:13:48.501 {
00:13:48.501 "name": "BaseBdev4",
00:13:48.501 "uuid": "d2e696ea-33c1-4dfa-b563-6ad947d9f1d3",
00:13:48.501 "is_configured": true,
00:13:48.501 "data_offset": 0,
00:13:48.501 "data_size": 65536
00:13:48.501 }
00:13:48.501 ]
00:13:48.501 }'
00:13:48.501 07:59:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:48.501 07:59:54 -- common/autotest_common.sh@10 -- # set +x
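concat carries no redundancy (has_redundancy returned 1 above), so deleting a single base bdev drives the online array to "offline", and the removed slot's name is nulled in base_bdevs_list while the survivors stay configured. The same transition as a standalone check (sketch):

    sock=/var/tmp/spdk-raid.sock
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" "$@"; }
    rpc bdev_malloc_delete BaseBdev1
    rpc bdev_raid_get_bdevs all | jq -e \
        '.[] | select(.name == "Existed_Raid") | .state == "offline"' > /dev/null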
00:13:49.069 07:59:54 -- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:13:49.069 07:59:54 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:13:49.069 07:59:54 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:49.069 07:59:54 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:13:49.328 07:59:55 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:13:49.328 07:59:55 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:13:49.328 07:59:55 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:13:49.588 [2024-07-13 07:59:55.233378] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:13:49.588 07:59:55 -- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:13:49.588 07:59:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:13:49.588 07:59:55 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:13:49.588 07:59:55 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:49.847 07:59:55 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:13:49.847 07:59:55 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:13:49.847 07:59:55 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:13:49.847 [2024-07-13 07:59:55.607835] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:13:49.847 07:59:55 -- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:13:49.847 07:59:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:13:49.847 07:59:55 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:49.847 07:59:55 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:13:50.106 07:59:55 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:13:50.106 07:59:55 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:13:50.106 07:59:55 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4
00:13:50.365 [2024-07-13 07:59:55.927026] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:13:50.365 [2024-07-13 07:59:55.927070] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027c80 name Existed_Raid, state offline
00:13:50.365 07:59:55 -- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:13:50.365 07:59:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:13:50.365 07:59:55 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:50.365 07:59:55 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:13:50.365 07:59:56 -- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:13:50.365 07:59:56 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:13:50.365 07:59:56 -- bdev/bdev_raid.sh@287 -- # killprocess 64527
00:13:50.365 07:59:56 -- common/autotest_common.sh@926 -- # '[' -z 64527 ']'
00:13:50.365 07:59:56 -- common/autotest_common.sh@930 -- # kill -0 64527
00:13:50.365 07:59:56 -- common/autotest_common.sh@931 -- # uname
00:13:50.624 07:59:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:13:50.624 07:59:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64527
00:13:50.624 07:59:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:13:50.624 07:59:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:13:50.624 killing process with pid 64527
00:13:50.624 07:59:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64527'
00:13:50.624 07:59:56 -- common/autotest_common.sh@945 -- # kill 64527
00:13:50.624 07:59:56 -- common/autotest_common.sh@950 -- # wait 64527
00:13:50.624 [2024-07-13 07:59:56.199811] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:13:50.624 [2024-07-13 07:59:56.199862] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:13:50.624 07:59:56 -- bdev/bdev_raid.sh@289 -- # return 0
00:13:50.624
00:13:50.624 real 0m10.909s
00:13:50.624 user 0m20.112s
00:13:50.624 sys 0m1.446s
00:13:50.624 ************************************
00:13:50.624 END TEST raid_state_function_test
00:13:50.624 ************************************
00:13:50.624 07:59:56 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:13:50.624 07:59:56 -- common/autotest_common.sh@10 -- # set +x
00:13:50.883 07:59:56 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true
00:13:50.883 07:59:56 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']'
00:13:50.883 07:59:56 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:13:50.883 07:59:56 -- common/autotest_common.sh@10 -- # set +x
00:13:50.883 ************************************
00:13:50.883 START TEST raid_state_function_test_sb
00:13:50.883 ************************************
00:13:50.883 07:59:56 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 true
00:13:50.883 07:59:56 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat
00:13:50.883 07:59:56 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4
00:13:50.883 07:59:56 -- bdev/bdev_raid.sh@204 -- # local superblock=true
00:13:50.883 07:59:56 -- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:13:50.883 07:59:56 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done))
00:13:50.883 07:59:56 -- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:13:50.883 07:59:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:13:50.883 07:59:56 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:13:50.883 07:59:56 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:13:50.883 07:59:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:13:50.883 07:59:56 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:13:50.883 07:59:56 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:13:50.883 07:59:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:13:50.883 07:59:56 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:13:50.883 07:59:56 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:13:50.883 07:59:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:13:50.883 07:59:56 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4
00:13:50.883 07:59:56 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:13:50.883 07:59:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:13:50.883 07:59:56 -- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:13:50.883 07:59:56 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:13:50.883 07:59:56 -- bdev/bdev_raid.sh@208 -- # local strip_size
00:13:50.883 07:59:56 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:13:50.883 07:59:56 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:13:50.883 07:59:56 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']'
00:13:50.883 07:59:56 -- bdev/bdev_raid.sh@213 -- # strip_size=64
00:13:50.883 07:59:56 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:13:50.883 07:59:56 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']'
00:13:50.883 07:59:56 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s
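raid_state_function_test_sb re-runs the same test body with superblock=true, so superblock_create_arg becomes -s: the raid is created with an on-disk superblock, and the base bdev descriptors later in this run show each member giving up space to it (data_offset 2048 and data_size 63488 instead of 0 and 65536). The create call in this mode (sketch):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -s -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid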
Process raid pid: 64928
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:13:50.883 07:59:56 -- bdev/bdev_raid.sh@226 -- # raid_pid=64928
00:13:50.883 07:59:56 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 64928'
00:13:50.883 07:59:56 -- bdev/bdev_raid.sh@228 -- # waitforlisten 64928 /var/tmp/spdk-raid.sock
00:13:50.883 07:59:56 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:13:50.883 07:59:56 -- common/autotest_common.sh@819 -- # '[' -z 64928 ']'
00:13:50.883 07:59:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:13:50.883 07:59:56 -- common/autotest_common.sh@824 -- # local max_retries=100
00:13:50.883 07:59:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:13:50.883 07:59:56 -- common/autotest_common.sh@828 -- # xtrace_disable
00:13:50.883 07:59:56 -- common/autotest_common.sh@10 -- # set +x
00:13:51.150 [2024-07-13 07:59:56.591791] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:13:51.150 [2024-07-13 07:59:56.592026] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:51.150 [2024-07-13 07:59:56.744841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:51.150 [2024-07-13 07:59:56.794380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:51.150 [2024-07-13 07:59:56.844022] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:51.722 07:59:57 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:13:51.722 07:59:57 -- common/autotest_common.sh@852 -- # return 0
00:13:51.722 07:59:57 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:13:51.722 [2024-07-13 07:59:57.513436] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:13:51.722 [2024-07-13 07:59:57.513505] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:13:51.722 [2024-07-13 07:59:57.513517] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:51.722 [2024-07-13 07:59:57.513537] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:51.722 [2024-07-13 07:59:57.513545] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:13:51.722 [2024-07-13 07:59:57.513577] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:13:51.722 [2024-07-13 07:59:57.513585] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:13:51.722 [2024-07-13 07:59:57.513606] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:13:51.722 07:59:57 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:13:51.722 07:59:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:13:51.722 07:59:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:13:51.722 07:59:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:13:51.722 07:59:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:51.722 07:59:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:13:51.722 07:59:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:51.722 07:59:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:51.722 07:59:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:51.722 07:59:57 -- bdev/bdev_raid.sh@125 -- # local tmp
00:13:51.722 07:59:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:51.722 07:59:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:51.981 07:59:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:51.981 "name": "Existed_Raid",
00:13:51.981 "uuid": "69e276f7-7c2d-4604-a743-369364e677da",
00:13:51.981 "strip_size_kb": 64,
00:13:51.981 "state": "configuring",
00:13:51.981 "raid_level": "concat",
00:13:51.981 "superblock": true,
00:13:51.981 "num_base_bdevs": 4,
00:13:51.981 "num_base_bdevs_discovered": 0,
00:13:51.981 "num_base_bdevs_operational": 4,
00:13:51.981 "base_bdevs_list": [
00:13:51.981 {
00:13:51.981 "name": "BaseBdev1",
00:13:51.981 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:51.981 "is_configured": false,
00:13:51.981 "data_offset": 0,
00:13:51.981 "data_size": 0
00:13:51.981 },
00:13:51.981 {
00:13:51.981 "name": "BaseBdev2",
00:13:51.981 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:51.981 "is_configured": false,
00:13:51.981 "data_offset": 0,
00:13:51.981 "data_size": 0
00:13:51.981 },
00:13:51.981 {
00:13:51.981 "name": "BaseBdev3",
00:13:51.981 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:51.981 "is_configured": false,
00:13:51.981 "data_offset": 0,
00:13:51.981 "data_size": 0
00:13:51.981 },
00:13:51.981 {
00:13:51.981 "name": "BaseBdev4",
00:13:51.981 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:51.981 "is_configured": false,
00:13:51.981 "data_offset": 0,
00:13:51.981 "data_size": 0
00:13:51.981 }
00:13:51.981 ]
00:13:51.981 }'
00:13:51.981 07:59:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:51.981 07:59:57 -- common/autotest_common.sh@10 -- # set +x
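With -s in effect the capture above now reports "superblock": true while the array still sits in "configuring" with nothing discovered. Pulling just those two fields (sketch, same socket and jq pattern as before):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all \
        | jq '.[] | select(.name == "Existed_Raid") | {state, superblock}'
    # -> { "state": "configuring", "superblock": true }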
00:13:52.550 07:59:58 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:13:52.809 [2024-07-13 07:59:58.497418] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:13:52.809 [2024-07-13 07:59:58.497452] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000025880 name Existed_Raid, state configuring
00:13:52.809 07:59:58 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:13:53.068 [2024-07-13 07:59:58.705504] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:13:53.068 [2024-07-13 07:59:58.705554] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:13:53.068 [2024-07-13 07:59:58.705564] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:53.068 [2024-07-13 07:59:58.705586] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:53.068 [2024-07-13 07:59:58.705594] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:13:53.068 [2024-07-13 07:59:58.705617] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:13:53.068 [2024-07-13 07:59:58.705624] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:13:53.068 [2024-07-13 07:59:58.705645] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:13:53.068 07:59:58 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:13:53.068 BaseBdev1
00:13:53.068 [2024-07-13 07:59:58.859564] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:53.327 07:59:58 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:13:53.327 07:59:58 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1
00:13:53.327 07:59:58 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:13:53.327 07:59:58 -- common/autotest_common.sh@889 -- # local i
00:13:53.327 07:59:58 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:13:53.327 07:59:58 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:13:53.327 07:59:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:13:53.327 07:59:59 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:13:53.587 [
00:13:53.587 {
00:13:53.587 "name": "BaseBdev1",
00:13:53.587 "aliases": [
00:13:53.587 "4ce72e79-d23b-49fd-9f1e-84ac812609a1"
00:13:53.587 ],
00:13:53.587 "product_name": "Malloc disk",
00:13:53.587 "block_size": 512,
00:13:53.587 "num_blocks": 65536,
00:13:53.587 "uuid": "4ce72e79-d23b-49fd-9f1e-84ac812609a1",
00:13:53.587 "assigned_rate_limits": {
00:13:53.587 "rw_ios_per_sec": 0,
00:13:53.587 "rw_mbytes_per_sec": 0,
00:13:53.587 "r_mbytes_per_sec": 0,
00:13:53.587 "w_mbytes_per_sec": 0
00:13:53.587 },
00:13:53.587 "claimed": true,
00:13:53.587 "claim_type": "exclusive_write",
00:13:53.587 "zoned": false,
00:13:53.587 "supported_io_types": {
00:13:53.587 "read": true,
00:13:53.587 "write": true,
00:13:53.587 "unmap": true,
00:13:53.587 "write_zeroes": true,
00:13:53.587 "flush": true,
00:13:53.587 "reset": true,
00:13:53.587 "compare": false,
00:13:53.587 "compare_and_write": false,
00:13:53.587 "abort": true,
00:13:53.587 "nvme_admin": false,
00:13:53.587 "nvme_io": false
00:13:53.587 },
00:13:53.587 "memory_domains": [
00:13:53.587 {
00:13:53.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:53.587 "dma_device_type": 2
00:13:53.587 }
00:13:53.587 ],
00:13:53.587 "driver_specific": {}
00:13:53.587 }
00:13:53.587 ]
00:13:53.587 07:59:59 -- common/autotest_common.sh@895 -- # return 0
00:13:53.587 07:59:59 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:13:53.587 07:59:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:13:53.587 07:59:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:13:53.587 07:59:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:13:53.587 07:59:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:53.587 07:59:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:13:53.587 07:59:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:53.587 07:59:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:53.587 07:59:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:53.587 07:59:59 -- bdev/bdev_raid.sh@125 -- # local tmp
00:13:53.587 07:59:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:53.587 07:59:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:53.587 07:59:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:53.587 "name": "Existed_Raid",
00:13:53.587 "uuid": "44f8e984-5006-43ac-b46b-525602972cd5",
00:13:53.587 "strip_size_kb": 64,
00:13:53.587 "state": "configuring",
00:13:53.587 "raid_level": "concat",
00:13:53.587 "superblock": true,
00:13:53.587 "num_base_bdevs": 4,
00:13:53.587 "num_base_bdevs_discovered": 1,
00:13:53.587 "num_base_bdevs_operational": 4,
00:13:53.587 "base_bdevs_list": [
00:13:53.587 {
00:13:53.587 "name": "BaseBdev1",
00:13:53.587 "uuid": "4ce72e79-d23b-49fd-9f1e-84ac812609a1",
00:13:53.587 "is_configured": true,
00:13:53.587 "data_offset": 2048,
00:13:53.587 "data_size": 63488
00:13:53.587 },
00:13:53.587 {
00:13:53.587 "name": "BaseBdev2",
00:13:53.587 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:53.587 "is_configured": false,
00:13:53.587 "data_offset": 0,
00:13:53.587 "data_size": 0
00:13:53.587 },
00:13:53.587 {
00:13:53.587 "name": "BaseBdev3",
00:13:53.587 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:53.587 "is_configured": false,
00:13:53.587 "data_offset": 0,
00:13:53.587 "data_size": 0
00:13:53.587 },
00:13:53.587 {
00:13:53.587 "name": "BaseBdev4",
00:13:53.587 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:53.587 "is_configured": false,
00:13:53.587 "data_offset": 0,
00:13:53.587 "data_size": 0
00:13:53.587 }
00:13:53.587 ]
00:13:53.587 }'
00:13:53.587 07:59:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:53.587 07:59:59 -- common/autotest_common.sh@10 -- # set +x
00:13:54.155 07:59:59 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:13:54.414 [2024-07-13 08:00:00.071711] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:13:54.414 [2024-07-13 08:00:00.071757] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring
00:13:54.414 08:00:00 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']'
00:13:54.414 08:00:00 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:13:54.671 08:00:00 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:13:54.671 BaseBdev1
00:13:54.671 08:00:00 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1
00:13:54.671 08:00:00 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1
00:13:54.671 08:00:00 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:13:54.671 08:00:00 -- common/autotest_common.sh@889 -- # local i
00:13:54.671 08:00:00 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:13:54.671 08:00:00 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:13:54.671 08:00:00 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:13:54.931 08:00:00 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:13:54.931 [
00:13:54.931 {
00:13:54.931 "name": "BaseBdev1",
00:13:54.931 "aliases": [
00:13:54.931 "160a0b14-fb9a-43d1-9a8a-9ad24d04f521"
00:13:54.931 ],
00:13:54.931 "product_name": "Malloc disk", 00:13:54.931 "block_size": 512, 00:13:54.931 "num_blocks": 65536, 00:13:54.931 "uuid": "160a0b14-fb9a-43d1-9a8a-9ad24d04f521", 00:13:54.931 "assigned_rate_limits": { 00:13:54.931 "rw_ios_per_sec": 0, 00:13:54.931 "rw_mbytes_per_sec": 0, 00:13:54.931 "r_mbytes_per_sec": 0, 00:13:54.931 "w_mbytes_per_sec": 0 00:13:54.931 }, 00:13:54.931 "claimed": false, 00:13:54.931 "zoned": false, 00:13:54.931 "supported_io_types": { 00:13:54.931 "read": true, 00:13:54.931 "write": true, 00:13:54.931 "unmap": true, 00:13:54.931 "write_zeroes": true, 00:13:54.931 "flush": true, 00:13:54.931 "reset": true, 00:13:54.931 "compare": false, 00:13:54.931 "compare_and_write": false, 00:13:54.931 "abort": true, 00:13:54.931 "nvme_admin": false, 00:13:54.931 "nvme_io": false 00:13:54.931 }, 00:13:54.931 "memory_domains": [ 00:13:54.931 { 00:13:54.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.931 "dma_device_type": 2 00:13:54.931 } 00:13:54.931 ], 00:13:54.931 "driver_specific": {} 00:13:54.931 } 00:13:54.931 ] 00:13:54.931 08:00:00 -- common/autotest_common.sh@895 -- # return 0 00:13:54.931 08:00:00 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:55.189 [2024-07-13 08:00:00.879310] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:55.189 [2024-07-13 08:00:00.880724] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:55.189 [2024-07-13 08:00:00.880787] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:55.189 [2024-07-13 08:00:00.880798] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:55.189 [2024-07-13 08:00:00.880819] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:55.189 [2024-07-13 08:00:00.880827] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:55.189 [2024-07-13 08:00:00.880860] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:55.190 08:00:00 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:13:55.190 08:00:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:55.190 08:00:00 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:55.190 08:00:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:55.190 08:00:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:55.190 08:00:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:55.190 08:00:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:55.190 08:00:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:55.190 08:00:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:55.190 08:00:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:55.190 08:00:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:55.190 08:00:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:55.190 08:00:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.190 08:00:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:55.448 08:00:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:55.448 "name": "Existed_Raid", 
00:13:55.448 "uuid": "a0ff85d8-8faa-4942-bf4a-981cdf9922a6", 00:13:55.448 "strip_size_kb": 64, 00:13:55.448 "state": "configuring", 00:13:55.448 "raid_level": "concat", 00:13:55.448 "superblock": true, 00:13:55.448 "num_base_bdevs": 4, 00:13:55.448 "num_base_bdevs_discovered": 1, 00:13:55.448 "num_base_bdevs_operational": 4, 00:13:55.448 "base_bdevs_list": [ 00:13:55.448 { 00:13:55.448 "name": "BaseBdev1", 00:13:55.448 "uuid": "160a0b14-fb9a-43d1-9a8a-9ad24d04f521", 00:13:55.448 "is_configured": true, 00:13:55.448 "data_offset": 2048, 00:13:55.448 "data_size": 63488 00:13:55.448 }, 00:13:55.448 { 00:13:55.448 "name": "BaseBdev2", 00:13:55.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.448 "is_configured": false, 00:13:55.448 "data_offset": 0, 00:13:55.448 "data_size": 0 00:13:55.448 }, 00:13:55.448 { 00:13:55.448 "name": "BaseBdev3", 00:13:55.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.448 "is_configured": false, 00:13:55.448 "data_offset": 0, 00:13:55.448 "data_size": 0 00:13:55.448 }, 00:13:55.448 { 00:13:55.448 "name": "BaseBdev4", 00:13:55.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.448 "is_configured": false, 00:13:55.448 "data_offset": 0, 00:13:55.448 "data_size": 0 00:13:55.448 } 00:13:55.448 ] 00:13:55.448 }' 00:13:55.448 08:00:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:55.448 08:00:01 -- common/autotest_common.sh@10 -- # set +x 00:13:56.013 08:00:01 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:56.270 BaseBdev2 00:13:56.270 [2024-07-13 08:00:01.850902] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:56.270 08:00:01 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:13:56.270 08:00:01 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:13:56.270 08:00:01 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:56.270 08:00:01 -- common/autotest_common.sh@889 -- # local i 00:13:56.270 08:00:01 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:56.270 08:00:01 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:56.270 08:00:01 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:56.270 08:00:01 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:56.527 [ 00:13:56.527 { 00:13:56.527 "name": "BaseBdev2", 00:13:56.527 "aliases": [ 00:13:56.527 "f1f87f77-210f-42ab-a027-67d4c050f9d5" 00:13:56.527 ], 00:13:56.527 "product_name": "Malloc disk", 00:13:56.527 "block_size": 512, 00:13:56.527 "num_blocks": 65536, 00:13:56.527 "uuid": "f1f87f77-210f-42ab-a027-67d4c050f9d5", 00:13:56.527 "assigned_rate_limits": { 00:13:56.527 "rw_ios_per_sec": 0, 00:13:56.527 "rw_mbytes_per_sec": 0, 00:13:56.527 "r_mbytes_per_sec": 0, 00:13:56.527 "w_mbytes_per_sec": 0 00:13:56.527 }, 00:13:56.527 "claimed": true, 00:13:56.527 "claim_type": "exclusive_write", 00:13:56.527 "zoned": false, 00:13:56.527 "supported_io_types": { 00:13:56.527 "read": true, 00:13:56.527 "write": true, 00:13:56.527 "unmap": true, 00:13:56.527 "write_zeroes": true, 00:13:56.527 "flush": true, 00:13:56.527 "reset": true, 00:13:56.527 "compare": false, 00:13:56.527 "compare_and_write": false, 00:13:56.527 "abort": true, 00:13:56.527 "nvme_admin": false, 00:13:56.527 "nvme_io": false 00:13:56.527 }, 00:13:56.527 
"memory_domains": [ 00:13:56.527 { 00:13:56.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.527 "dma_device_type": 2 00:13:56.527 } 00:13:56.527 ], 00:13:56.527 "driver_specific": {} 00:13:56.527 } 00:13:56.527 ] 00:13:56.527 08:00:02 -- common/autotest_common.sh@895 -- # return 0 00:13:56.527 08:00:02 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:56.527 08:00:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:56.527 08:00:02 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:56.527 08:00:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:56.527 08:00:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:56.527 08:00:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:56.527 08:00:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:56.527 08:00:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:56.527 08:00:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:56.527 08:00:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:56.527 08:00:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:56.527 08:00:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:56.527 08:00:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.527 08:00:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:56.527 08:00:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:56.527 "name": "Existed_Raid", 00:13:56.527 "uuid": "a0ff85d8-8faa-4942-bf4a-981cdf9922a6", 00:13:56.527 "strip_size_kb": 64, 00:13:56.527 "state": "configuring", 00:13:56.527 "raid_level": "concat", 00:13:56.527 "superblock": true, 00:13:56.527 "num_base_bdevs": 4, 00:13:56.527 "num_base_bdevs_discovered": 2, 00:13:56.527 "num_base_bdevs_operational": 4, 00:13:56.527 "base_bdevs_list": [ 00:13:56.527 { 00:13:56.527 "name": "BaseBdev1", 00:13:56.527 "uuid": "160a0b14-fb9a-43d1-9a8a-9ad24d04f521", 00:13:56.527 "is_configured": true, 00:13:56.527 "data_offset": 2048, 00:13:56.527 "data_size": 63488 00:13:56.527 }, 00:13:56.527 { 00:13:56.527 "name": "BaseBdev2", 00:13:56.527 "uuid": "f1f87f77-210f-42ab-a027-67d4c050f9d5", 00:13:56.527 "is_configured": true, 00:13:56.527 "data_offset": 2048, 00:13:56.527 "data_size": 63488 00:13:56.527 }, 00:13:56.527 { 00:13:56.527 "name": "BaseBdev3", 00:13:56.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.527 "is_configured": false, 00:13:56.527 "data_offset": 0, 00:13:56.527 "data_size": 0 00:13:56.527 }, 00:13:56.527 { 00:13:56.527 "name": "BaseBdev4", 00:13:56.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.527 "is_configured": false, 00:13:56.527 "data_offset": 0, 00:13:56.527 "data_size": 0 00:13:56.527 } 00:13:56.527 ] 00:13:56.527 }' 00:13:56.527 08:00:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:56.527 08:00:02 -- common/autotest_common.sh@10 -- # set +x 00:13:57.092 08:00:02 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:57.349 BaseBdev3 00:13:57.349 [2024-07-13 08:00:03.098808] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:57.349 08:00:03 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:13:57.349 08:00:03 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:13:57.349 08:00:03 -- common/autotest_common.sh@888 -- # local 
bdev_timeout= 00:13:57.349 08:00:03 -- common/autotest_common.sh@889 -- # local i 00:13:57.349 08:00:03 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:57.349 08:00:03 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:57.349 08:00:03 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:57.608 08:00:03 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:57.865 [ 00:13:57.865 { 00:13:57.865 "name": "BaseBdev3", 00:13:57.865 "aliases": [ 00:13:57.865 "d3efda55-bd19-4c01-b39d-57472224e34c" 00:13:57.865 ], 00:13:57.865 "product_name": "Malloc disk", 00:13:57.865 "block_size": 512, 00:13:57.865 "num_blocks": 65536, 00:13:57.865 "uuid": "d3efda55-bd19-4c01-b39d-57472224e34c", 00:13:57.865 "assigned_rate_limits": { 00:13:57.865 "rw_ios_per_sec": 0, 00:13:57.865 "rw_mbytes_per_sec": 0, 00:13:57.865 "r_mbytes_per_sec": 0, 00:13:57.865 "w_mbytes_per_sec": 0 00:13:57.865 }, 00:13:57.865 "claimed": true, 00:13:57.865 "claim_type": "exclusive_write", 00:13:57.865 "zoned": false, 00:13:57.865 "supported_io_types": { 00:13:57.865 "read": true, 00:13:57.865 "write": true, 00:13:57.865 "unmap": true, 00:13:57.865 "write_zeroes": true, 00:13:57.865 "flush": true, 00:13:57.865 "reset": true, 00:13:57.865 "compare": false, 00:13:57.865 "compare_and_write": false, 00:13:57.865 "abort": true, 00:13:57.865 "nvme_admin": false, 00:13:57.865 "nvme_io": false 00:13:57.865 }, 00:13:57.865 "memory_domains": [ 00:13:57.865 { 00:13:57.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.865 "dma_device_type": 2 00:13:57.865 } 00:13:57.865 ], 00:13:57.865 "driver_specific": {} 00:13:57.865 } 00:13:57.865 ] 00:13:57.865 08:00:03 -- common/autotest_common.sh@895 -- # return 0 00:13:57.865 08:00:03 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:57.865 08:00:03 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:57.865 08:00:03 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:57.865 08:00:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:57.865 08:00:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:57.865 08:00:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:57.865 08:00:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:57.865 08:00:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:57.865 08:00:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:57.865 08:00:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:57.865 08:00:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:57.865 08:00:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:57.865 08:00:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.865 08:00:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:57.865 08:00:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:57.865 "name": "Existed_Raid", 00:13:57.865 "uuid": "a0ff85d8-8faa-4942-bf4a-981cdf9922a6", 00:13:57.865 "strip_size_kb": 64, 00:13:57.865 "state": "configuring", 00:13:57.865 "raid_level": "concat", 00:13:57.865 "superblock": true, 00:13:57.865 "num_base_bdevs": 4, 00:13:57.865 "num_base_bdevs_discovered": 3, 00:13:57.865 "num_base_bdevs_operational": 4, 00:13:57.865 "base_bdevs_list": [ 00:13:57.865 { 
00:13:57.865 "name": "BaseBdev1", 00:13:57.865 "uuid": "160a0b14-fb9a-43d1-9a8a-9ad24d04f521", 00:13:57.865 "is_configured": true, 00:13:57.865 "data_offset": 2048, 00:13:57.865 "data_size": 63488 00:13:57.865 }, 00:13:57.865 { 00:13:57.865 "name": "BaseBdev2", 00:13:57.865 "uuid": "f1f87f77-210f-42ab-a027-67d4c050f9d5", 00:13:57.865 "is_configured": true, 00:13:57.865 "data_offset": 2048, 00:13:57.865 "data_size": 63488 00:13:57.865 }, 00:13:57.865 { 00:13:57.865 "name": "BaseBdev3", 00:13:57.865 "uuid": "d3efda55-bd19-4c01-b39d-57472224e34c", 00:13:57.865 "is_configured": true, 00:13:57.865 "data_offset": 2048, 00:13:57.865 "data_size": 63488 00:13:57.865 }, 00:13:57.865 { 00:13:57.865 "name": "BaseBdev4", 00:13:57.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.865 "is_configured": false, 00:13:57.865 "data_offset": 0, 00:13:57.865 "data_size": 0 00:13:57.865 } 00:13:57.865 ] 00:13:57.865 }' 00:13:57.865 08:00:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:57.865 08:00:03 -- common/autotest_common.sh@10 -- # set +x 00:13:58.800 08:00:04 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:13:58.800 BaseBdev4 00:13:58.800 [2024-07-13 08:00:04.458596] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:58.800 [2024-07-13 08:00:04.458713] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000028280 00:13:58.800 [2024-07-13 08:00:04.458725] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:58.800 [2024-07-13 08:00:04.458801] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:13:58.800 [2024-07-13 08:00:04.458984] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000028280 00:13:58.800 [2024-07-13 08:00:04.458994] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000028280 00:13:58.800 [2024-07-13 08:00:04.459068] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.800 08:00:04 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:13:58.800 08:00:04 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:13:58.800 08:00:04 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:58.800 08:00:04 -- common/autotest_common.sh@889 -- # local i 00:13:58.800 08:00:04 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:58.800 08:00:04 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:58.800 08:00:04 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:59.059 08:00:04 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:59.059 [ 00:13:59.059 { 00:13:59.059 "name": "BaseBdev4", 00:13:59.059 "aliases": [ 00:13:59.059 "aea1f1cc-e62b-44a2-b6e7-5f578a27e0dc" 00:13:59.059 ], 00:13:59.059 "product_name": "Malloc disk", 00:13:59.059 "block_size": 512, 00:13:59.059 "num_blocks": 65536, 00:13:59.059 "uuid": "aea1f1cc-e62b-44a2-b6e7-5f578a27e0dc", 00:13:59.059 "assigned_rate_limits": { 00:13:59.059 "rw_ios_per_sec": 0, 00:13:59.059 "rw_mbytes_per_sec": 0, 00:13:59.059 "r_mbytes_per_sec": 0, 00:13:59.059 "w_mbytes_per_sec": 0 00:13:59.059 }, 00:13:59.059 "claimed": true, 00:13:59.059 "claim_type": "exclusive_write", 00:13:59.059 "zoned": false, 
00:13:59.059 "supported_io_types": { 00:13:59.059 "read": true, 00:13:59.059 "write": true, 00:13:59.059 "unmap": true, 00:13:59.059 "write_zeroes": true, 00:13:59.059 "flush": true, 00:13:59.059 "reset": true, 00:13:59.059 "compare": false, 00:13:59.059 "compare_and_write": false, 00:13:59.059 "abort": true, 00:13:59.059 "nvme_admin": false, 00:13:59.059 "nvme_io": false 00:13:59.059 }, 00:13:59.059 "memory_domains": [ 00:13:59.059 { 00:13:59.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.059 "dma_device_type": 2 00:13:59.059 } 00:13:59.059 ], 00:13:59.059 "driver_specific": {} 00:13:59.059 } 00:13:59.059 ] 00:13:59.059 08:00:04 -- common/autotest_common.sh@895 -- # return 0 00:13:59.059 08:00:04 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:59.059 08:00:04 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:59.059 08:00:04 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:13:59.059 08:00:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:59.059 08:00:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:59.059 08:00:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:59.059 08:00:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:59.059 08:00:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:59.059 08:00:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:59.059 08:00:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:59.059 08:00:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:59.059 08:00:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:59.059 08:00:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:59.059 08:00:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.317 08:00:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:59.317 "name": "Existed_Raid", 00:13:59.317 "uuid": "a0ff85d8-8faa-4942-bf4a-981cdf9922a6", 00:13:59.317 "strip_size_kb": 64, 00:13:59.317 "state": "online", 00:13:59.317 "raid_level": "concat", 00:13:59.317 "superblock": true, 00:13:59.317 "num_base_bdevs": 4, 00:13:59.317 "num_base_bdevs_discovered": 4, 00:13:59.317 "num_base_bdevs_operational": 4, 00:13:59.317 "base_bdevs_list": [ 00:13:59.317 { 00:13:59.317 "name": "BaseBdev1", 00:13:59.317 "uuid": "160a0b14-fb9a-43d1-9a8a-9ad24d04f521", 00:13:59.317 "is_configured": true, 00:13:59.317 "data_offset": 2048, 00:13:59.317 "data_size": 63488 00:13:59.317 }, 00:13:59.317 { 00:13:59.317 "name": "BaseBdev2", 00:13:59.317 "uuid": "f1f87f77-210f-42ab-a027-67d4c050f9d5", 00:13:59.317 "is_configured": true, 00:13:59.317 "data_offset": 2048, 00:13:59.317 "data_size": 63488 00:13:59.317 }, 00:13:59.317 { 00:13:59.317 "name": "BaseBdev3", 00:13:59.317 "uuid": "d3efda55-bd19-4c01-b39d-57472224e34c", 00:13:59.317 "is_configured": true, 00:13:59.317 "data_offset": 2048, 00:13:59.317 "data_size": 63488 00:13:59.317 }, 00:13:59.317 { 00:13:59.317 "name": "BaseBdev4", 00:13:59.317 "uuid": "aea1f1cc-e62b-44a2-b6e7-5f578a27e0dc", 00:13:59.317 "is_configured": true, 00:13:59.317 "data_offset": 2048, 00:13:59.317 "data_size": 63488 00:13:59.317 } 00:13:59.317 ] 00:13:59.317 }' 00:13:59.317 08:00:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:59.317 08:00:04 -- common/autotest_common.sh@10 -- # set +x 00:13:59.896 08:00:05 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:14:00.154 [2024-07-13 08:00:05.930848] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:00.155 [2024-07-13 08:00:05.930873] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:00.155 [2024-07-13 08:00:05.930908] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:00.155 08:00:05 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:00.155 08:00:05 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:14:00.155 08:00:05 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:00.155 08:00:05 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:00.155 08:00:05 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:00.155 08:00:05 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:14:00.155 08:00:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:00.155 08:00:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:00.155 08:00:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:00.155 08:00:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:00.155 08:00:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:00.155 08:00:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:00.155 08:00:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:00.155 08:00:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:00.155 08:00:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:00.155 08:00:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.155 08:00:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.413 08:00:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:00.413 "name": "Existed_Raid", 00:14:00.413 "uuid": "a0ff85d8-8faa-4942-bf4a-981cdf9922a6", 00:14:00.413 "strip_size_kb": 64, 00:14:00.413 "state": "offline", 00:14:00.413 "raid_level": "concat", 00:14:00.413 "superblock": true, 00:14:00.413 "num_base_bdevs": 4, 00:14:00.413 "num_base_bdevs_discovered": 3, 00:14:00.413 "num_base_bdevs_operational": 3, 00:14:00.413 "base_bdevs_list": [ 00:14:00.413 { 00:14:00.413 "name": null, 00:14:00.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.413 "is_configured": false, 00:14:00.413 "data_offset": 2048, 00:14:00.413 "data_size": 63488 00:14:00.413 }, 00:14:00.413 { 00:14:00.413 "name": "BaseBdev2", 00:14:00.413 "uuid": "f1f87f77-210f-42ab-a027-67d4c050f9d5", 00:14:00.413 "is_configured": true, 00:14:00.413 "data_offset": 2048, 00:14:00.413 "data_size": 63488 00:14:00.413 }, 00:14:00.413 { 00:14:00.413 "name": "BaseBdev3", 00:14:00.413 "uuid": "d3efda55-bd19-4c01-b39d-57472224e34c", 00:14:00.413 "is_configured": true, 00:14:00.413 "data_offset": 2048, 00:14:00.413 "data_size": 63488 00:14:00.413 }, 00:14:00.413 { 00:14:00.413 "name": "BaseBdev4", 00:14:00.413 "uuid": "aea1f1cc-e62b-44a2-b6e7-5f578a27e0dc", 00:14:00.413 "is_configured": true, 00:14:00.413 "data_offset": 2048, 00:14:00.413 "data_size": 63488 00:14:00.413 } 00:14:00.413 ] 00:14:00.413 }' 00:14:00.413 08:00:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:00.413 08:00:06 -- common/autotest_common.sh@10 -- # set +x 00:14:00.979 08:00:06 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:00.979 08:00:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:00.979 08:00:06 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.979 08:00:06 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:01.236 08:00:06 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:01.236 08:00:06 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:01.236 08:00:06 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:01.493 [2024-07-13 08:00:07.220478] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:01.493 08:00:07 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:01.493 08:00:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:01.493 08:00:07 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:01.493 08:00:07 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:01.750 08:00:07 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:01.750 08:00:07 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:01.750 08:00:07 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:02.008 [2024-07-13 08:00:07.659140] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:02.008 08:00:07 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:02.008 08:00:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:02.008 08:00:07 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:02.008 08:00:07 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:02.266 08:00:07 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:02.266 08:00:07 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:02.266 08:00:07 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:14:02.266 [2024-07-13 08:00:08.045591] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:02.266 [2024-07-13 08:00:08.045632] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000028280 name Existed_Raid, state offline 00:14:02.266 08:00:08 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:02.266 08:00:08 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:02.266 08:00:08 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:02.266 08:00:08 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:02.523 08:00:08 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:02.523 08:00:08 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:02.523 08:00:08 -- bdev/bdev_raid.sh@287 -- # killprocess 64928 00:14:02.523 08:00:08 -- common/autotest_common.sh@926 -- # '[' -z 64928 ']' 00:14:02.524 08:00:08 -- common/autotest_common.sh@930 -- # kill -0 64928 00:14:02.524 08:00:08 -- common/autotest_common.sh@931 -- # uname 00:14:02.524 08:00:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:02.524 08:00:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64928 00:14:02.524 killing process with pid 64928 00:14:02.524 08:00:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:02.524 08:00:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:02.524 08:00:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64928' 00:14:02.524 
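Up to this point the test has deleted base bdevs one at a time and re-queried the array after each removal; since concat carries no redundancy, the expected state flips from online to offline on the first deletion. A minimal sketch of that per-bdev check, using the same rpc.py path, socket, and jq filter seen in this log:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for bdev in BaseBdev2 BaseBdev3 BaseBdev4; do
      # remove one base bdev out from under the array
      $RPC bdev_malloc_delete "$bdev"
      # the raid bdev must still be listed, now in the offline state
      name=$($RPC bdev_raid_get_bdevs all | jq -r '.[0]["name"]')
      [ "$name" = Existed_Raid ] || exit 1
  done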
08:00:08 -- common/autotest_common.sh@945 -- # kill 64928 00:14:02.524 08:00:08 -- common/autotest_common.sh@950 -- # wait 64928 00:14:02.524 [2024-07-13 08:00:08.328263] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:02.524 [2024-07-13 08:00:08.328315] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:02.782 08:00:08 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:02.782 ************************************ 00:14:02.782 END TEST raid_state_function_test_sb 00:14:02.782 ************************************ 00:14:02.782 00:14:02.782 real 0m12.071s 00:14:02.782 user 0m22.249s 00:14:02.782 sys 0m1.600s 00:14:02.782 08:00:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:02.782 08:00:08 -- common/autotest_common.sh@10 -- # set +x 00:14:02.782 08:00:08 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:14:02.782 08:00:08 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:14:02.782 08:00:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:02.782 08:00:08 -- common/autotest_common.sh@10 -- # set +x 00:14:02.782 ************************************ 00:14:02.782 START TEST raid_superblock_test 00:14:02.782 ************************************ 00:14:02.782 08:00:08 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 4 00:14:02.782 08:00:08 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:14:02.782 08:00:08 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:14:02.782 08:00:08 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:14:02.782 08:00:08 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:14:02.782 08:00:08 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:14:02.782 08:00:08 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:14:02.782 08:00:08 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:14:02.782 08:00:08 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:14:02.782 08:00:08 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:14:02.782 08:00:08 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:14:02.782 08:00:08 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:14:02.782 08:00:08 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:14:02.782 08:00:08 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:14:02.782 08:00:08 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:14:02.782 08:00:08 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:14:02.782 08:00:08 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:14:02.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:02.782 08:00:08 -- bdev/bdev_raid.sh@357 -- # raid_pid=65351 00:14:02.782 08:00:08 -- bdev/bdev_raid.sh@358 -- # waitforlisten 65351 /var/tmp/spdk-raid.sock 00:14:02.782 08:00:08 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:02.782 08:00:08 -- common/autotest_common.sh@819 -- # '[' -z 65351 ']' 00:14:02.782 08:00:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:02.782 08:00:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:02.782 08:00:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
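The raid_superblock_test starting here drives a standalone bdev_svc app on a private RPC socket rather than a full SPDK target; the EAL initialization lines that follow are that app coming up. Roughly, the launch reduces to the sketch below, assuming the repo layout shown in this log (waitforlisten is a helper from the suite's common scripts):

  # start a minimal bdev service with raid debug logging on its own socket
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
      -r /var/tmp/spdk-raid.sock -L bdev_raid &
  raid_pid=$!
  # block until the app is answering RPCs on that socket
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock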
00:14:02.782 08:00:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:02.782 08:00:08 -- common/autotest_common.sh@10 -- # set +x 00:14:03.041 [2024-07-13 08:00:08.723616] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:14:03.041 [2024-07-13 08:00:08.723835] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65351 ] 00:14:03.299 [2024-07-13 08:00:08.857142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.299 [2024-07-13 08:00:08.902528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.299 [2024-07-13 08:00:08.947703] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:03.868 08:00:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:03.868 08:00:09 -- common/autotest_common.sh@852 -- # return 0 00:14:03.868 08:00:09 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:14:03.868 08:00:09 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:03.868 08:00:09 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:14:03.868 08:00:09 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:14:03.868 08:00:09 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:03.868 08:00:09 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:03.868 08:00:09 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:03.868 08:00:09 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:03.868 08:00:09 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:03.868 malloc1 00:14:03.868 08:00:09 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:04.136 [2024-07-13 08:00:09.775491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:04.136 [2024-07-13 08:00:09.775566] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.136 [2024-07-13 08:00:09.775613] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000026180 00:14:04.136 [2024-07-13 08:00:09.775650] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.136 [2024-07-13 08:00:09.777306] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.136 [2024-07-13 08:00:09.777358] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:04.136 pt1 00:14:04.136 08:00:09 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:04.136 08:00:09 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:04.136 08:00:09 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:14:04.136 08:00:09 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:14:04.136 08:00:09 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:04.136 08:00:09 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:04.136 08:00:09 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:04.136 08:00:09 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:04.136 08:00:09 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:04.136 malloc2 00:14:04.136 08:00:09 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:04.395 [2024-07-13 08:00:10.128349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:04.395 [2024-07-13 08:00:10.128419] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.395 [2024-07-13 08:00:10.128647] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027f80 00:14:04.395 [2024-07-13 08:00:10.128698] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.395 pt2 00:14:04.395 [2024-07-13 08:00:10.130187] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.395 [2024-07-13 08:00:10.130225] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:04.395 08:00:10 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:04.395 08:00:10 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:04.395 08:00:10 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:14:04.395 08:00:10 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:14:04.395 08:00:10 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:04.395 08:00:10 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:04.395 08:00:10 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:04.395 08:00:10 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:04.395 08:00:10 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:14:04.652 malloc3 00:14:04.652 08:00:10 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:04.909 [2024-07-13 08:00:10.485172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:04.909 [2024-07-13 08:00:10.485238] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.909 [2024-07-13 08:00:10.485282] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000029d80 00:14:04.909 [2024-07-13 08:00:10.485316] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.909 [2024-07-13 08:00:10.486960] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.909 [2024-07-13 08:00:10.487011] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:04.909 pt3 00:14:04.909 08:00:10 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:04.909 08:00:10 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:04.909 08:00:10 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:14:04.909 08:00:10 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:14:04.909 08:00:10 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:04.909 08:00:10 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:04.909 08:00:10 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:04.909 08:00:10 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:04.909 08:00:10 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:14:04.909 malloc4 00:14:04.909 08:00:10 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:05.167 [2024-07-13 08:00:10.818023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:05.167 [2024-07-13 08:00:10.818113] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.167 [2024-07-13 08:00:10.818152] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002bb80 00:14:05.167 [2024-07-13 08:00:10.818185] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.167 [2024-07-13 08:00:10.819724] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.167 [2024-07-13 08:00:10.819764] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:05.167 pt4 00:14:05.167 08:00:10 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:05.167 08:00:10 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:05.167 08:00:10 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:14:05.167 [2024-07-13 08:00:10.962130] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:05.167 [2024-07-13 08:00:10.963600] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:05.167 [2024-07-13 08:00:10.963640] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:05.167 [2024-07-13 08:00:10.963664] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:05.167 [2024-07-13 08:00:10.963759] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002d080 00:14:05.167 [2024-07-13 08:00:10.963770] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:05.167 [2024-07-13 08:00:10.963861] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:14:05.167 [2024-07-13 08:00:10.964045] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002d080 00:14:05.167 [2024-07-13 08:00:10.964056] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002d080 00:14:05.167 [2024-07-13 08:00:10.964119] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.167 08:00:10 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:05.167 08:00:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:05.168 08:00:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:05.168 08:00:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:05.168 08:00:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:05.168 08:00:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:05.168 08:00:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:05.168 08:00:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:05.168 08:00:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:05.168 08:00:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:05.168 08:00:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.426 08:00:10 -- bdev/bdev_raid.sh@127 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:05.426 08:00:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:05.426 "name": "raid_bdev1", 00:14:05.426 "uuid": "4b5cdce0-28ed-4440-a6b9-693ff536620e", 00:14:05.426 "strip_size_kb": 64, 00:14:05.426 "state": "online", 00:14:05.426 "raid_level": "concat", 00:14:05.426 "superblock": true, 00:14:05.426 "num_base_bdevs": 4, 00:14:05.426 "num_base_bdevs_discovered": 4, 00:14:05.426 "num_base_bdevs_operational": 4, 00:14:05.426 "base_bdevs_list": [ 00:14:05.426 { 00:14:05.426 "name": "pt1", 00:14:05.426 "uuid": "e413e005-14d3-5766-a3ba-8a0f24ef212c", 00:14:05.426 "is_configured": true, 00:14:05.426 "data_offset": 2048, 00:14:05.426 "data_size": 63488 00:14:05.426 }, 00:14:05.426 { 00:14:05.426 "name": "pt2", 00:14:05.426 "uuid": "bd14d3f4-161d-5634-85f6-e48665669734", 00:14:05.426 "is_configured": true, 00:14:05.426 "data_offset": 2048, 00:14:05.426 "data_size": 63488 00:14:05.426 }, 00:14:05.426 { 00:14:05.426 "name": "pt3", 00:14:05.426 "uuid": "de837d0f-e7eb-5609-9f5a-a1942462d751", 00:14:05.426 "is_configured": true, 00:14:05.426 "data_offset": 2048, 00:14:05.426 "data_size": 63488 00:14:05.426 }, 00:14:05.426 { 00:14:05.426 "name": "pt4", 00:14:05.426 "uuid": "ccdee302-aeab-58af-9b2e-1c7199a19519", 00:14:05.426 "is_configured": true, 00:14:05.426 "data_offset": 2048, 00:14:05.426 "data_size": 63488 00:14:05.426 } 00:14:05.426 ] 00:14:05.426 }' 00:14:05.426 08:00:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:05.426 08:00:11 -- common/autotest_common.sh@10 -- # set +x 00:14:05.994 08:00:11 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:05.994 08:00:11 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:14:06.253 [2024-07-13 08:00:11.938285] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:06.253 08:00:11 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=4b5cdce0-28ed-4440-a6b9-693ff536620e 00:14:06.253 08:00:11 -- bdev/bdev_raid.sh@380 -- # '[' -z 4b5cdce0-28ed-4440-a6b9-693ff536620e ']' 00:14:06.253 08:00:11 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:06.511 [2024-07-13 08:00:12.086139] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:06.511 [2024-07-13 08:00:12.086169] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:06.511 [2024-07-13 08:00:12.086243] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:06.511 [2024-07-13 08:00:12.086285] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:06.511 [2024-07-13 08:00:12.086294] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002d080 name raid_bdev1, state offline 00:14:06.511 08:00:12 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:14:06.511 08:00:12 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:06.769 08:00:12 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:14:06.769 08:00:12 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:14:06.769 08:00:12 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:06.769 08:00:12 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
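The pt1..pt4 bdevs being torn down here are passthru wrappers over the malloc bdevs, created with fixed UUIDs so the superblock test can reason about stable identifiers. Their creation side, sketched from the RPC calls earlier in this log:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2 3 4; do
      $RPC bdev_malloc_create 32 512 -b "malloc$i"
      # wrap each malloc bdev in a passthru bdev with a deterministic UUID
      $RPC bdev_passthru_create -b "malloc$i" -p "pt$i" \
          -u "00000000-0000-0000-0000-00000000000$i"
  done
  # assemble the array on the wrappers, writing an on-disk superblock (-s)
  $RPC bdev_raid_create -z 64 -s -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1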
00:14:06.769 08:00:12 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:06.769 08:00:12 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:07.027 08:00:12 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:07.027 08:00:12 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:07.027 08:00:12 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:07.027 08:00:12 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:14:07.286 08:00:12 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:07.286 08:00:12 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:07.545 08:00:13 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:14:07.545 08:00:13 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:14:07.545 08:00:13 -- common/autotest_common.sh@640 -- # local es=0 00:14:07.545 08:00:13 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:14:07.545 08:00:13 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:07.545 08:00:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:07.545 08:00:13 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:07.545 08:00:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:07.545 08:00:13 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:07.545 08:00:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:07.545 08:00:13 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:07.545 08:00:13 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:07.545 08:00:13 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:14:07.545 [2024-07-13 08:00:13.338266] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:07.545 [2024-07-13 08:00:13.339765] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:07.545 [2024-07-13 08:00:13.339798] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:07.545 [2024-07-13 08:00:13.339815] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:07.545 [2024-07-13 08:00:13.339842] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:14:07.545 [2024-07-13 08:00:13.339898] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:14:07.545 [2024-07-13 08:00:13.339925] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:14:07.545 
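The "Existing raid superblock" errors around this point are deliberate: the superblock written through the pt wrappers landed on the underlying malloc bdevs, so assembling those malloc bdevs directly must be rejected. The harness asserts the failure with its NOT helper, roughly:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # NOT inverts the exit status: this step passes only if bdev_raid_create
  # fails (the RPC returns -17, "File exists", as the response below shows)
  NOT $RPC bdev_raid_create -z 64 -r concat \
      -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1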
[2024-07-13 08:00:13.339964] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:14:07.545 [2024-07-13 08:00:13.339984] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:07.545 [2024-07-13 08:00:13.339993] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002d680 name raid_bdev1, state configuring 00:14:07.545 request: 00:14:07.545 { 00:14:07.545 "name": "raid_bdev1", 00:14:07.545 "raid_level": "concat", 00:14:07.545 "base_bdevs": [ 00:14:07.545 "malloc1", 00:14:07.545 "malloc2", 00:14:07.545 "malloc3", 00:14:07.545 "malloc4" 00:14:07.545 ], 00:14:07.545 "superblock": false, 00:14:07.545 "strip_size_kb": 64, 00:14:07.545 "method": "bdev_raid_create", 00:14:07.545 "req_id": 1 00:14:07.545 } 00:14:07.545 Got JSON-RPC error response 00:14:07.546 response: 00:14:07.546 { 00:14:07.546 "code": -17, 00:14:07.546 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:07.546 } 00:14:07.546 08:00:13 -- common/autotest_common.sh@643 -- # es=1 00:14:07.546 08:00:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:07.546 08:00:13 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:07.546 08:00:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:07.546 08:00:13 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:14:07.546 08:00:13 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:07.806 08:00:13 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:14:07.806 08:00:13 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:14:07.806 08:00:13 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:08.063 [2024-07-13 08:00:13.638260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:08.063 [2024-07-13 08:00:13.638329] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:08.063 [2024-07-13 08:00:13.638380] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002eb80 00:14:08.063 [2024-07-13 08:00:13.638403] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:08.063 [2024-07-13 08:00:13.640105] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:08.063 [2024-07-13 08:00:13.640156] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:08.063 [2024-07-13 08:00:13.640228] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:14:08.063 [2024-07-13 08:00:13.640272] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:08.063 pt1 00:14:08.063 08:00:13 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:14:08.063 08:00:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:08.063 08:00:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:08.063 08:00:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:08.063 08:00:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:08.063 08:00:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:08.063 08:00:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:08.063 08:00:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:08.063 08:00:13 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:14:08.063 08:00:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:08.063 08:00:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:08.063 08:00:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.063 08:00:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:08.063 "name": "raid_bdev1", 00:14:08.063 "uuid": "4b5cdce0-28ed-4440-a6b9-693ff536620e", 00:14:08.063 "strip_size_kb": 64, 00:14:08.063 "state": "configuring", 00:14:08.063 "raid_level": "concat", 00:14:08.063 "superblock": true, 00:14:08.063 "num_base_bdevs": 4, 00:14:08.063 "num_base_bdevs_discovered": 1, 00:14:08.063 "num_base_bdevs_operational": 4, 00:14:08.063 "base_bdevs_list": [ 00:14:08.063 { 00:14:08.063 "name": "pt1", 00:14:08.063 "uuid": "e413e005-14d3-5766-a3ba-8a0f24ef212c", 00:14:08.063 "is_configured": true, 00:14:08.063 "data_offset": 2048, 00:14:08.063 "data_size": 63488 00:14:08.063 }, 00:14:08.063 { 00:14:08.063 "name": null, 00:14:08.063 "uuid": "bd14d3f4-161d-5634-85f6-e48665669734", 00:14:08.063 "is_configured": false, 00:14:08.063 "data_offset": 2048, 00:14:08.063 "data_size": 63488 00:14:08.063 }, 00:14:08.063 { 00:14:08.063 "name": null, 00:14:08.063 "uuid": "de837d0f-e7eb-5609-9f5a-a1942462d751", 00:14:08.063 "is_configured": false, 00:14:08.063 "data_offset": 2048, 00:14:08.063 "data_size": 63488 00:14:08.063 }, 00:14:08.063 { 00:14:08.063 "name": null, 00:14:08.063 "uuid": "ccdee302-aeab-58af-9b2e-1c7199a19519", 00:14:08.063 "is_configured": false, 00:14:08.063 "data_offset": 2048, 00:14:08.063 "data_size": 63488 00:14:08.063 } 00:14:08.063 ] 00:14:08.063 }' 00:14:08.063 08:00:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:08.063 08:00:13 -- common/autotest_common.sh@10 -- # set +x 00:14:08.629 08:00:14 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:14:08.629 08:00:14 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:08.886 [2024-07-13 08:00:14.614391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:08.886 [2024-07-13 08:00:14.614645] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:08.886 [2024-07-13 08:00:14.614731] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000030980 00:14:08.886 [2024-07-13 08:00:14.614757] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:08.886 [2024-07-13 08:00:14.615060] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:08.886 [2024-07-13 08:00:14.615094] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:08.886 [2024-07-13 08:00:14.615153] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:08.886 [2024-07-13 08:00:14.615175] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:08.886 pt2 00:14:08.886 08:00:14 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:09.144 [2024-07-13 08:00:14.762405] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:09.144 08:00:14 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:14:09.144 08:00:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
00:14:09.144 08:00:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:09.144 08:00:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:09.144 08:00:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:09.144 08:00:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:09.144 08:00:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:09.144 08:00:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:09.144 08:00:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:09.144 08:00:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:09.144 08:00:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.144 08:00:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.402 08:00:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:09.402 "name": "raid_bdev1", 00:14:09.402 "uuid": "4b5cdce0-28ed-4440-a6b9-693ff536620e", 00:14:09.402 "strip_size_kb": 64, 00:14:09.402 "state": "configuring", 00:14:09.402 "raid_level": "concat", 00:14:09.402 "superblock": true, 00:14:09.402 "num_base_bdevs": 4, 00:14:09.402 "num_base_bdevs_discovered": 1, 00:14:09.402 "num_base_bdevs_operational": 4, 00:14:09.402 "base_bdevs_list": [ 00:14:09.402 { 00:14:09.402 "name": "pt1", 00:14:09.402 "uuid": "e413e005-14d3-5766-a3ba-8a0f24ef212c", 00:14:09.403 "is_configured": true, 00:14:09.403 "data_offset": 2048, 00:14:09.403 "data_size": 63488 00:14:09.403 }, 00:14:09.403 { 00:14:09.403 "name": null, 00:14:09.403 "uuid": "bd14d3f4-161d-5634-85f6-e48665669734", 00:14:09.403 "is_configured": false, 00:14:09.403 "data_offset": 2048, 00:14:09.403 "data_size": 63488 00:14:09.403 }, 00:14:09.403 { 00:14:09.403 "name": null, 00:14:09.403 "uuid": "de837d0f-e7eb-5609-9f5a-a1942462d751", 00:14:09.403 "is_configured": false, 00:14:09.403 "data_offset": 2048, 00:14:09.403 "data_size": 63488 00:14:09.403 }, 00:14:09.403 { 00:14:09.403 "name": null, 00:14:09.403 "uuid": "ccdee302-aeab-58af-9b2e-1c7199a19519", 00:14:09.403 "is_configured": false, 00:14:09.403 "data_offset": 2048, 00:14:09.403 "data_size": 63488 00:14:09.403 } 00:14:09.403 ] 00:14:09.403 }' 00:14:09.403 08:00:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:09.403 08:00:14 -- common/autotest_common.sh@10 -- # set +x 00:14:09.968 08:00:15 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:14:09.968 08:00:15 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:09.968 08:00:15 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:09.968 [2024-07-13 08:00:15.710477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:09.968 [2024-07-13 08:00:15.710560] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.968 [2024-07-13 08:00:15.710603] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000031e80 00:14:09.968 [2024-07-13 08:00:15.710622] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.968 [2024-07-13 08:00:15.710897] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.968 [2024-07-13 08:00:15.710934] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:09.968 [2024-07-13 08:00:15.710986] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:14:09.968 [2024-07-13 08:00:15.711018] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:09.968 pt2 00:14:09.968 08:00:15 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:09.968 08:00:15 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:09.968 08:00:15 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:10.227 [2024-07-13 08:00:15.882537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:10.227 [2024-07-13 08:00:15.882616] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.227 [2024-07-13 08:00:15.882650] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000033380 00:14:10.227 [2024-07-13 08:00:15.882675] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.227 [2024-07-13 08:00:15.882922] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.227 [2024-07-13 08:00:15.882962] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:10.227 [2024-07-13 08:00:15.883009] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:14:10.227 [2024-07-13 08:00:15.883035] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:10.227 pt3 00:14:10.227 08:00:15 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:10.227 08:00:15 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:10.227 08:00:15 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:10.227 [2024-07-13 08:00:16.030559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:10.227 [2024-07-13 08:00:16.030635] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.227 [2024-07-13 08:00:16.030675] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000034880 00:14:10.227 [2024-07-13 08:00:16.030705] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.227 [2024-07-13 08:00:16.030978] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.227 [2024-07-13 08:00:16.031017] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:10.227 [2024-07-13 08:00:16.031067] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:14:10.227 [2024-07-13 08:00:16.031087] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:10.227 [2024-07-13 08:00:16.031151] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000030380 00:14:10.227 [2024-07-13 08:00:16.031160] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:10.227 [2024-07-13 08:00:16.031214] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:14:10.227 [2024-07-13 08:00:16.031374] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000030380 00:14:10.227 [2024-07-13 08:00:16.031385] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000030380 00:14:10.227 [2024-07-13 08:00:16.031436] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:14:10.227 pt4 00:14:10.485 08:00:16 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:10.485 08:00:16 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:10.485 08:00:16 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:10.485 08:00:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:10.485 08:00:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:10.485 08:00:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:10.485 08:00:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:10.485 08:00:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:10.485 08:00:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:10.485 08:00:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:10.485 08:00:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:10.485 08:00:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:10.485 08:00:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.485 08:00:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.485 08:00:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:10.485 "name": "raid_bdev1", 00:14:10.485 "uuid": "4b5cdce0-28ed-4440-a6b9-693ff536620e", 00:14:10.485 "strip_size_kb": 64, 00:14:10.485 "state": "online", 00:14:10.485 "raid_level": "concat", 00:14:10.485 "superblock": true, 00:14:10.485 "num_base_bdevs": 4, 00:14:10.485 "num_base_bdevs_discovered": 4, 00:14:10.485 "num_base_bdevs_operational": 4, 00:14:10.485 "base_bdevs_list": [ 00:14:10.485 { 00:14:10.485 "name": "pt1", 00:14:10.485 "uuid": "e413e005-14d3-5766-a3ba-8a0f24ef212c", 00:14:10.485 "is_configured": true, 00:14:10.485 "data_offset": 2048, 00:14:10.485 "data_size": 63488 00:14:10.485 }, 00:14:10.485 { 00:14:10.485 "name": "pt2", 00:14:10.485 "uuid": "bd14d3f4-161d-5634-85f6-e48665669734", 00:14:10.485 "is_configured": true, 00:14:10.485 "data_offset": 2048, 00:14:10.485 "data_size": 63488 00:14:10.485 }, 00:14:10.485 { 00:14:10.485 "name": "pt3", 00:14:10.485 "uuid": "de837d0f-e7eb-5609-9f5a-a1942462d751", 00:14:10.485 "is_configured": true, 00:14:10.485 "data_offset": 2048, 00:14:10.485 "data_size": 63488 00:14:10.485 }, 00:14:10.485 { 00:14:10.485 "name": "pt4", 00:14:10.485 "uuid": "ccdee302-aeab-58af-9b2e-1c7199a19519", 00:14:10.485 "is_configured": true, 00:14:10.485 "data_offset": 2048, 00:14:10.485 "data_size": 63488 00:14:10.485 } 00:14:10.485 ] 00:14:10.485 }' 00:14:10.485 08:00:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:10.485 08:00:16 -- common/autotest_common.sh@10 -- # set +x 00:14:11.049 08:00:16 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:11.049 08:00:16 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:14:11.306 [2024-07-13 08:00:17.010780] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:11.306 08:00:17 -- bdev/bdev_raid.sh@430 -- # '[' 4b5cdce0-28ed-4440-a6b9-693ff536620e '!=' 4b5cdce0-28ed-4440-a6b9-693ff536620e ']' 00:14:11.306 08:00:17 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:14:11.306 08:00:17 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:11.306 08:00:17 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:11.306 08:00:17 -- bdev/bdev_raid.sh@511 -- # killprocess 65351 00:14:11.306 08:00:17 -- common/autotest_common.sh@926 -- # '[' 
-z 65351 ']' 00:14:11.306 08:00:17 -- common/autotest_common.sh@930 -- # kill -0 65351 00:14:11.306 08:00:17 -- common/autotest_common.sh@931 -- # uname 00:14:11.306 08:00:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:11.306 08:00:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65351 00:14:11.307 08:00:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:11.307 08:00:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:11.307 killing process with pid 65351 00:14:11.307 08:00:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65351' 00:14:11.307 08:00:17 -- common/autotest_common.sh@945 -- # kill 65351 00:14:11.307 [2024-07-13 08:00:17.052628] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:11.307 08:00:17 -- common/autotest_common.sh@950 -- # wait 65351 00:14:11.307 [2024-07-13 08:00:17.052693] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:11.307 [2024-07-13 08:00:17.052736] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:11.307 [2024-07-13 08:00:17.052745] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000030380 name raid_bdev1, state offline 00:14:11.307 [2024-07-13 08:00:17.092674] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:11.564 ************************************ 00:14:11.564 END TEST raid_superblock_test 00:14:11.564 ************************************ 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@513 -- # return 0 00:14:11.564 00:14:11.564 real 0m8.700s 00:14:11.564 user 0m15.689s 00:14:11.564 sys 0m1.190s 00:14:11.564 08:00:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:11.564 08:00:17 -- common/autotest_common.sh@10 -- # set +x 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:14:11.564 08:00:17 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:11.564 08:00:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:11.564 08:00:17 -- common/autotest_common.sh@10 -- # set +x 00:14:11.564 ************************************ 00:14:11.564 START TEST raid_state_function_test 00:14:11.564 ************************************ 00:14:11.564 08:00:17 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 false 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@206 
-- # echo BaseBdev3 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:11.564 Process raid pid: 65649 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@226 -- # raid_pid=65649 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 65649' 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:11.564 08:00:17 -- bdev/bdev_raid.sh@228 -- # waitforlisten 65649 /var/tmp/spdk-raid.sock 00:14:11.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:11.564 08:00:17 -- common/autotest_common.sh@819 -- # '[' -z 65649 ']' 00:14:11.564 08:00:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:11.564 08:00:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:11.564 08:00:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:11.564 08:00:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:11.564 08:00:17 -- common/autotest_common.sh@10 -- # set +x 00:14:11.822 [2024-07-13 08:00:17.474447] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
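At this point raid_state_function_test has forked a bare bdev_svc target on its own RPC socket and is blocking in waitforlisten until the socket answers. A rough bash sketch of that launch-and-wait handshake, assuming the binary and socket paths shown in the trace (the real waitforlisten in autotest_common.sh does more bookkeeping, and rpc_get_methods is used here only as a readiness probe):

#!/bin/bash
# Start the SPDK bdev_svc app with raid debug logging on a private RPC socket.
svc=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
"$svc" -r "$sock" -i 0 -L bdev_raid &
raid_pid=$!
# Poll until the target answers RPCs on the socket (give up after ~10 s).
for _ in $(seq 1 100); do
    "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done
echo "target up with pid $raid_pid"

Once the loop exits, the test can issue bdev_raid_create and the other RPCs seen below against the same socket.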
00:14:11.822 [2024-07-13 08:00:17.474702] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.822 [2024-07-13 08:00:17.620364] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.080 [2024-07-13 08:00:17.673939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.080 [2024-07-13 08:00:17.725496] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:12.338 08:00:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:12.338 08:00:18 -- common/autotest_common.sh@852 -- # return 0 00:14:12.338 08:00:18 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:12.596 [2024-07-13 08:00:18.335631] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:12.596 [2024-07-13 08:00:18.335694] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:12.596 [2024-07-13 08:00:18.335706] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:12.596 [2024-07-13 08:00:18.335727] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:12.596 [2024-07-13 08:00:18.335734] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:12.596 [2024-07-13 08:00:18.335768] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:12.596 [2024-07-13 08:00:18.335776] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:12.596 [2024-07-13 08:00:18.335796] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:12.596 08:00:18 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:12.596 08:00:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:12.596 08:00:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:12.596 08:00:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:12.596 08:00:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:12.596 08:00:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:12.596 08:00:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:12.596 08:00:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:12.596 08:00:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:12.596 08:00:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:12.596 08:00:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.596 08:00:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:12.854 08:00:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:12.854 "name": "Existed_Raid", 00:14:12.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.854 "strip_size_kb": 0, 00:14:12.854 "state": "configuring", 00:14:12.854 "raid_level": "raid1", 00:14:12.854 "superblock": false, 00:14:12.854 "num_base_bdevs": 4, 00:14:12.854 "num_base_bdevs_discovered": 0, 00:14:12.854 "num_base_bdevs_operational": 4, 00:14:12.854 "base_bdevs_list": [ 00:14:12.854 { 00:14:12.854 "name": 
"BaseBdev1", 00:14:12.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.854 "is_configured": false, 00:14:12.854 "data_offset": 0, 00:14:12.854 "data_size": 0 00:14:12.854 }, 00:14:12.854 { 00:14:12.854 "name": "BaseBdev2", 00:14:12.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.854 "is_configured": false, 00:14:12.854 "data_offset": 0, 00:14:12.854 "data_size": 0 00:14:12.854 }, 00:14:12.854 { 00:14:12.854 "name": "BaseBdev3", 00:14:12.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.854 "is_configured": false, 00:14:12.854 "data_offset": 0, 00:14:12.854 "data_size": 0 00:14:12.854 }, 00:14:12.854 { 00:14:12.854 "name": "BaseBdev4", 00:14:12.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.854 "is_configured": false, 00:14:12.854 "data_offset": 0, 00:14:12.854 "data_size": 0 00:14:12.854 } 00:14:12.854 ] 00:14:12.854 }' 00:14:12.854 08:00:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:12.854 08:00:18 -- common/autotest_common.sh@10 -- # set +x 00:14:13.422 08:00:19 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:13.680 [2024-07-13 08:00:19.259697] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:13.680 [2024-07-13 08:00:19.259733] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000025880 name Existed_Raid, state configuring 00:14:13.680 08:00:19 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:13.680 [2024-07-13 08:00:19.447744] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:13.680 [2024-07-13 08:00:19.447799] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:13.680 [2024-07-13 08:00:19.447809] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:13.680 [2024-07-13 08:00:19.447832] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:13.680 [2024-07-13 08:00:19.447840] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:13.680 [2024-07-13 08:00:19.447866] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:13.680 [2024-07-13 08:00:19.447874] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:13.680 [2024-07-13 08:00:19.447895] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:13.680 08:00:19 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:13.938 BaseBdev1 00:14:13.938 [2024-07-13 08:00:19.601501] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:13.938 08:00:19 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:13.938 08:00:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:13.938 08:00:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:13.938 08:00:19 -- common/autotest_common.sh@889 -- # local i 00:14:13.938 08:00:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:13.938 08:00:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:13.938 08:00:19 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:14.197 08:00:19 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:14.197 [ 00:14:14.197 { 00:14:14.197 "name": "BaseBdev1", 00:14:14.197 "aliases": [ 00:14:14.197 "9f3d6cc2-755f-47fa-b8ff-480687f287b1" 00:14:14.197 ], 00:14:14.197 "product_name": "Malloc disk", 00:14:14.197 "block_size": 512, 00:14:14.197 "num_blocks": 65536, 00:14:14.197 "uuid": "9f3d6cc2-755f-47fa-b8ff-480687f287b1", 00:14:14.197 "assigned_rate_limits": { 00:14:14.197 "rw_ios_per_sec": 0, 00:14:14.197 "rw_mbytes_per_sec": 0, 00:14:14.197 "r_mbytes_per_sec": 0, 00:14:14.197 "w_mbytes_per_sec": 0 00:14:14.197 }, 00:14:14.197 "claimed": true, 00:14:14.197 "claim_type": "exclusive_write", 00:14:14.197 "zoned": false, 00:14:14.197 "supported_io_types": { 00:14:14.197 "read": true, 00:14:14.197 "write": true, 00:14:14.197 "unmap": true, 00:14:14.197 "write_zeroes": true, 00:14:14.197 "flush": true, 00:14:14.197 "reset": true, 00:14:14.197 "compare": false, 00:14:14.197 "compare_and_write": false, 00:14:14.197 "abort": true, 00:14:14.197 "nvme_admin": false, 00:14:14.197 "nvme_io": false 00:14:14.197 }, 00:14:14.197 "memory_domains": [ 00:14:14.197 { 00:14:14.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.197 "dma_device_type": 2 00:14:14.197 } 00:14:14.197 ], 00:14:14.197 "driver_specific": {} 00:14:14.197 } 00:14:14.197 ] 00:14:14.197 08:00:19 -- common/autotest_common.sh@895 -- # return 0 00:14:14.197 08:00:19 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:14.197 08:00:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:14.197 08:00:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:14.197 08:00:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:14.197 08:00:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:14.197 08:00:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:14.197 08:00:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:14.197 08:00:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:14.197 08:00:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:14.197 08:00:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:14.197 08:00:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.197 08:00:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.456 08:00:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:14.456 "name": "Existed_Raid", 00:14:14.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.456 "strip_size_kb": 0, 00:14:14.456 "state": "configuring", 00:14:14.456 "raid_level": "raid1", 00:14:14.456 "superblock": false, 00:14:14.456 "num_base_bdevs": 4, 00:14:14.456 "num_base_bdevs_discovered": 1, 00:14:14.456 "num_base_bdevs_operational": 4, 00:14:14.456 "base_bdevs_list": [ 00:14:14.456 { 00:14:14.456 "name": "BaseBdev1", 00:14:14.456 "uuid": "9f3d6cc2-755f-47fa-b8ff-480687f287b1", 00:14:14.456 "is_configured": true, 00:14:14.456 "data_offset": 0, 00:14:14.456 "data_size": 65536 00:14:14.456 }, 00:14:14.456 { 00:14:14.456 "name": "BaseBdev2", 00:14:14.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.456 "is_configured": false, 00:14:14.456 "data_offset": 0, 00:14:14.456 "data_size": 0 00:14:14.456 }, 
00:14:14.456 { 00:14:14.456 "name": "BaseBdev3", 00:14:14.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.456 "is_configured": false, 00:14:14.456 "data_offset": 0, 00:14:14.456 "data_size": 0 00:14:14.456 }, 00:14:14.456 { 00:14:14.456 "name": "BaseBdev4", 00:14:14.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.456 "is_configured": false, 00:14:14.456 "data_offset": 0, 00:14:14.456 "data_size": 0 00:14:14.456 } 00:14:14.456 ] 00:14:14.456 }' 00:14:14.456 08:00:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:14.456 08:00:20 -- common/autotest_common.sh@10 -- # set +x 00:14:15.023 08:00:20 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:15.282 [2024-07-13 08:00:20.857681] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:15.282 [2024-07-13 08:00:20.857724] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:14:15.282 08:00:20 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:14:15.282 08:00:20 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:15.282 [2024-07-13 08:00:20.997722] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:15.282 [2024-07-13 08:00:20.998950] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:15.282 [2024-07-13 08:00:20.999011] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:15.282 [2024-07-13 08:00:20.999021] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:15.282 [2024-07-13 08:00:20.999041] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:15.282 [2024-07-13 08:00:20.999049] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:15.282 [2024-07-13 08:00:20.999067] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:15.282 08:00:21 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:15.282 08:00:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:15.282 08:00:21 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:15.282 08:00:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:15.282 08:00:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:15.282 08:00:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:15.282 08:00:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:15.282 08:00:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:15.282 08:00:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:15.282 08:00:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:15.282 08:00:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:15.282 08:00:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:15.282 08:00:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.282 08:00:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.540 08:00:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:15.540 "name": "Existed_Raid", 00:14:15.540 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:15.540 "strip_size_kb": 0, 00:14:15.540 "state": "configuring", 00:14:15.540 "raid_level": "raid1", 00:14:15.540 "superblock": false, 00:14:15.540 "num_base_bdevs": 4, 00:14:15.540 "num_base_bdevs_discovered": 1, 00:14:15.540 "num_base_bdevs_operational": 4, 00:14:15.540 "base_bdevs_list": [ 00:14:15.540 { 00:14:15.540 "name": "BaseBdev1", 00:14:15.540 "uuid": "9f3d6cc2-755f-47fa-b8ff-480687f287b1", 00:14:15.540 "is_configured": true, 00:14:15.540 "data_offset": 0, 00:14:15.540 "data_size": 65536 00:14:15.540 }, 00:14:15.540 { 00:14:15.540 "name": "BaseBdev2", 00:14:15.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.540 "is_configured": false, 00:14:15.540 "data_offset": 0, 00:14:15.540 "data_size": 0 00:14:15.540 }, 00:14:15.540 { 00:14:15.540 "name": "BaseBdev3", 00:14:15.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.540 "is_configured": false, 00:14:15.540 "data_offset": 0, 00:14:15.540 "data_size": 0 00:14:15.540 }, 00:14:15.540 { 00:14:15.540 "name": "BaseBdev4", 00:14:15.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.540 "is_configured": false, 00:14:15.540 "data_offset": 0, 00:14:15.540 "data_size": 0 00:14:15.540 } 00:14:15.540 ] 00:14:15.540 }' 00:14:15.540 08:00:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:15.540 08:00:21 -- common/autotest_common.sh@10 -- # set +x 00:14:16.106 08:00:21 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:16.363 [2024-07-13 08:00:21.921351] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:16.363 BaseBdev2 00:14:16.363 08:00:21 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:16.363 08:00:21 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:16.363 08:00:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:16.363 08:00:21 -- common/autotest_common.sh@889 -- # local i 00:14:16.363 08:00:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:16.363 08:00:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:16.363 08:00:21 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:16.363 08:00:22 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:16.627 [ 00:14:16.627 { 00:14:16.627 "name": "BaseBdev2", 00:14:16.627 "aliases": [ 00:14:16.627 "a6d395b1-9a38-4ee2-8ea1-1189cbeef549" 00:14:16.627 ], 00:14:16.627 "product_name": "Malloc disk", 00:14:16.627 "block_size": 512, 00:14:16.627 "num_blocks": 65536, 00:14:16.627 "uuid": "a6d395b1-9a38-4ee2-8ea1-1189cbeef549", 00:14:16.627 "assigned_rate_limits": { 00:14:16.627 "rw_ios_per_sec": 0, 00:14:16.627 "rw_mbytes_per_sec": 0, 00:14:16.627 "r_mbytes_per_sec": 0, 00:14:16.627 "w_mbytes_per_sec": 0 00:14:16.627 }, 00:14:16.627 "claimed": true, 00:14:16.627 "claim_type": "exclusive_write", 00:14:16.627 "zoned": false, 00:14:16.627 "supported_io_types": { 00:14:16.627 "read": true, 00:14:16.627 "write": true, 00:14:16.627 "unmap": true, 00:14:16.627 "write_zeroes": true, 00:14:16.627 "flush": true, 00:14:16.627 "reset": true, 00:14:16.627 "compare": false, 00:14:16.627 "compare_and_write": false, 00:14:16.627 "abort": true, 00:14:16.627 "nvme_admin": false, 00:14:16.627 "nvme_io": false 00:14:16.627 }, 00:14:16.627 "memory_domains": [ 00:14:16.627 { 
00:14:16.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.627 "dma_device_type": 2 00:14:16.627 } 00:14:16.627 ], 00:14:16.627 "driver_specific": {} 00:14:16.627 } 00:14:16.627 ] 00:14:16.627 08:00:22 -- common/autotest_common.sh@895 -- # return 0 00:14:16.627 08:00:22 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:16.627 08:00:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:16.627 08:00:22 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:16.627 08:00:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:16.627 08:00:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:16.627 08:00:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:16.627 08:00:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:16.627 08:00:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:16.627 08:00:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:16.627 08:00:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:16.627 08:00:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:16.627 08:00:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:16.627 08:00:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.627 08:00:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.906 08:00:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:16.906 "name": "Existed_Raid", 00:14:16.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.906 "strip_size_kb": 0, 00:14:16.906 "state": "configuring", 00:14:16.906 "raid_level": "raid1", 00:14:16.906 "superblock": false, 00:14:16.906 "num_base_bdevs": 4, 00:14:16.906 "num_base_bdevs_discovered": 2, 00:14:16.906 "num_base_bdevs_operational": 4, 00:14:16.906 "base_bdevs_list": [ 00:14:16.906 { 00:14:16.906 "name": "BaseBdev1", 00:14:16.906 "uuid": "9f3d6cc2-755f-47fa-b8ff-480687f287b1", 00:14:16.906 "is_configured": true, 00:14:16.906 "data_offset": 0, 00:14:16.906 "data_size": 65536 00:14:16.906 }, 00:14:16.906 { 00:14:16.906 "name": "BaseBdev2", 00:14:16.906 "uuid": "a6d395b1-9a38-4ee2-8ea1-1189cbeef549", 00:14:16.906 "is_configured": true, 00:14:16.906 "data_offset": 0, 00:14:16.906 "data_size": 65536 00:14:16.906 }, 00:14:16.906 { 00:14:16.906 "name": "BaseBdev3", 00:14:16.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.906 "is_configured": false, 00:14:16.906 "data_offset": 0, 00:14:16.906 "data_size": 0 00:14:16.906 }, 00:14:16.906 { 00:14:16.906 "name": "BaseBdev4", 00:14:16.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.906 "is_configured": false, 00:14:16.906 "data_offset": 0, 00:14:16.906 "data_size": 0 00:14:16.906 } 00:14:16.906 ] 00:14:16.906 }' 00:14:16.906 08:00:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:16.906 08:00:22 -- common/autotest_common.sh@10 -- # set +x 00:14:17.491 08:00:23 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:17.491 [2024-07-13 08:00:23.185097] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:17.491 BaseBdev3 00:14:17.491 08:00:23 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:14:17.491 08:00:23 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:14:17.491 08:00:23 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:17.491 08:00:23 -- 
common/autotest_common.sh@889 -- # local i 00:14:17.492 08:00:23 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:17.492 08:00:23 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:17.492 08:00:23 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:17.749 08:00:23 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:17.749 [ 00:14:17.750 { 00:14:17.750 "name": "BaseBdev3", 00:14:17.750 "aliases": [ 00:14:17.750 "3d11ef7a-1d93-48b0-a6bc-381f22f01af9" 00:14:17.750 ], 00:14:17.750 "product_name": "Malloc disk", 00:14:17.750 "block_size": 512, 00:14:17.750 "num_blocks": 65536, 00:14:17.750 "uuid": "3d11ef7a-1d93-48b0-a6bc-381f22f01af9", 00:14:17.750 "assigned_rate_limits": { 00:14:17.750 "rw_ios_per_sec": 0, 00:14:17.750 "rw_mbytes_per_sec": 0, 00:14:17.750 "r_mbytes_per_sec": 0, 00:14:17.750 "w_mbytes_per_sec": 0 00:14:17.750 }, 00:14:17.750 "claimed": true, 00:14:17.750 "claim_type": "exclusive_write", 00:14:17.750 "zoned": false, 00:14:17.750 "supported_io_types": { 00:14:17.750 "read": true, 00:14:17.750 "write": true, 00:14:17.750 "unmap": true, 00:14:17.750 "write_zeroes": true, 00:14:17.750 "flush": true, 00:14:17.750 "reset": true, 00:14:17.750 "compare": false, 00:14:17.750 "compare_and_write": false, 00:14:17.750 "abort": true, 00:14:17.750 "nvme_admin": false, 00:14:17.750 "nvme_io": false 00:14:17.750 }, 00:14:17.750 "memory_domains": [ 00:14:17.750 { 00:14:17.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.750 "dma_device_type": 2 00:14:17.750 } 00:14:17.750 ], 00:14:17.750 "driver_specific": {} 00:14:17.750 } 00:14:17.750 ] 00:14:17.750 08:00:23 -- common/autotest_common.sh@895 -- # return 0 00:14:17.750 08:00:23 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:17.750 08:00:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:17.750 08:00:23 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:17.750 08:00:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:17.750 08:00:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:17.750 08:00:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:17.750 08:00:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:17.750 08:00:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:17.750 08:00:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:17.750 08:00:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:17.750 08:00:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:17.750 08:00:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:17.750 08:00:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.750 08:00:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:18.008 08:00:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:18.008 "name": "Existed_Raid", 00:14:18.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.008 "strip_size_kb": 0, 00:14:18.008 "state": "configuring", 00:14:18.008 "raid_level": "raid1", 00:14:18.008 "superblock": false, 00:14:18.008 "num_base_bdevs": 4, 00:14:18.008 "num_base_bdevs_discovered": 3, 00:14:18.008 "num_base_bdevs_operational": 4, 00:14:18.008 "base_bdevs_list": [ 00:14:18.008 { 00:14:18.008 "name": "BaseBdev1", 
00:14:18.008 "uuid": "9f3d6cc2-755f-47fa-b8ff-480687f287b1", 00:14:18.008 "is_configured": true, 00:14:18.008 "data_offset": 0, 00:14:18.008 "data_size": 65536 00:14:18.008 }, 00:14:18.008 { 00:14:18.008 "name": "BaseBdev2", 00:14:18.008 "uuid": "a6d395b1-9a38-4ee2-8ea1-1189cbeef549", 00:14:18.008 "is_configured": true, 00:14:18.008 "data_offset": 0, 00:14:18.009 "data_size": 65536 00:14:18.009 }, 00:14:18.009 { 00:14:18.009 "name": "BaseBdev3", 00:14:18.009 "uuid": "3d11ef7a-1d93-48b0-a6bc-381f22f01af9", 00:14:18.009 "is_configured": true, 00:14:18.009 "data_offset": 0, 00:14:18.009 "data_size": 65536 00:14:18.009 }, 00:14:18.009 { 00:14:18.009 "name": "BaseBdev4", 00:14:18.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.009 "is_configured": false, 00:14:18.009 "data_offset": 0, 00:14:18.009 "data_size": 0 00:14:18.009 } 00:14:18.009 ] 00:14:18.009 }' 00:14:18.009 08:00:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:18.009 08:00:23 -- common/autotest_common.sh@10 -- # set +x 00:14:18.576 08:00:24 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:18.841 [2024-07-13 08:00:24.444958] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:18.841 [2024-07-13 08:00:24.445015] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000027c80 00:14:18.841 [2024-07-13 08:00:24.445024] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:18.841 [2024-07-13 08:00:24.445112] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:14:18.841 [2024-07-13 08:00:24.445283] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000027c80 00:14:18.841 [2024-07-13 08:00:24.445293] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000027c80 00:14:18.841 [2024-07-13 08:00:24.445425] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.841 BaseBdev4 00:14:18.841 08:00:24 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:14:18.841 08:00:24 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:14:18.841 08:00:24 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:18.841 08:00:24 -- common/autotest_common.sh@889 -- # local i 00:14:18.841 08:00:24 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:18.841 08:00:24 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:18.841 08:00:24 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:18.841 08:00:24 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:19.102 [ 00:14:19.102 { 00:14:19.102 "name": "BaseBdev4", 00:14:19.102 "aliases": [ 00:14:19.102 "4ce4667b-5074-4974-9d45-63535009b814" 00:14:19.102 ], 00:14:19.102 "product_name": "Malloc disk", 00:14:19.102 "block_size": 512, 00:14:19.102 "num_blocks": 65536, 00:14:19.102 "uuid": "4ce4667b-5074-4974-9d45-63535009b814", 00:14:19.102 "assigned_rate_limits": { 00:14:19.102 "rw_ios_per_sec": 0, 00:14:19.102 "rw_mbytes_per_sec": 0, 00:14:19.102 "r_mbytes_per_sec": 0, 00:14:19.102 "w_mbytes_per_sec": 0 00:14:19.102 }, 00:14:19.102 "claimed": true, 00:14:19.102 "claim_type": "exclusive_write", 00:14:19.102 "zoned": false, 00:14:19.102 "supported_io_types": { 
00:14:19.102 "read": true, 00:14:19.102 "write": true, 00:14:19.102 "unmap": true, 00:14:19.102 "write_zeroes": true, 00:14:19.102 "flush": true, 00:14:19.102 "reset": true, 00:14:19.102 "compare": false, 00:14:19.102 "compare_and_write": false, 00:14:19.102 "abort": true, 00:14:19.102 "nvme_admin": false, 00:14:19.102 "nvme_io": false 00:14:19.102 }, 00:14:19.102 "memory_domains": [ 00:14:19.102 { 00:14:19.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.102 "dma_device_type": 2 00:14:19.102 } 00:14:19.102 ], 00:14:19.102 "driver_specific": {} 00:14:19.102 } 00:14:19.102 ] 00:14:19.102 08:00:24 -- common/autotest_common.sh@895 -- # return 0 00:14:19.102 08:00:24 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:19.102 08:00:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:19.102 08:00:24 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:14:19.102 08:00:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:19.102 08:00:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:19.102 08:00:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:19.102 08:00:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:19.102 08:00:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:19.102 08:00:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:19.102 08:00:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:19.102 08:00:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:19.102 08:00:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:19.102 08:00:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.102 08:00:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.360 08:00:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:19.360 "name": "Existed_Raid", 00:14:19.360 "uuid": "debd2bcc-76c9-4415-89ce-3a10d9f36c4d", 00:14:19.360 "strip_size_kb": 0, 00:14:19.360 "state": "online", 00:14:19.360 "raid_level": "raid1", 00:14:19.360 "superblock": false, 00:14:19.360 "num_base_bdevs": 4, 00:14:19.360 "num_base_bdevs_discovered": 4, 00:14:19.360 "num_base_bdevs_operational": 4, 00:14:19.360 "base_bdevs_list": [ 00:14:19.360 { 00:14:19.360 "name": "BaseBdev1", 00:14:19.360 "uuid": "9f3d6cc2-755f-47fa-b8ff-480687f287b1", 00:14:19.360 "is_configured": true, 00:14:19.360 "data_offset": 0, 00:14:19.360 "data_size": 65536 00:14:19.360 }, 00:14:19.360 { 00:14:19.360 "name": "BaseBdev2", 00:14:19.360 "uuid": "a6d395b1-9a38-4ee2-8ea1-1189cbeef549", 00:14:19.360 "is_configured": true, 00:14:19.360 "data_offset": 0, 00:14:19.360 "data_size": 65536 00:14:19.360 }, 00:14:19.360 { 00:14:19.360 "name": "BaseBdev3", 00:14:19.360 "uuid": "3d11ef7a-1d93-48b0-a6bc-381f22f01af9", 00:14:19.360 "is_configured": true, 00:14:19.360 "data_offset": 0, 00:14:19.360 "data_size": 65536 00:14:19.360 }, 00:14:19.360 { 00:14:19.360 "name": "BaseBdev4", 00:14:19.360 "uuid": "4ce4667b-5074-4974-9d45-63535009b814", 00:14:19.360 "is_configured": true, 00:14:19.360 "data_offset": 0, 00:14:19.360 "data_size": 65536 00:14:19.360 } 00:14:19.360 ] 00:14:19.360 }' 00:14:19.360 08:00:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:19.360 08:00:24 -- common/autotest_common.sh@10 -- # set +x 00:14:19.929 08:00:25 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:19.929 [2024-07-13 08:00:25.697220] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:19.929 08:00:25 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:19.929 08:00:25 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:14:19.929 08:00:25 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:19.929 08:00:25 -- bdev/bdev_raid.sh@196 -- # return 0 00:14:19.929 08:00:25 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:14:19.929 08:00:25 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:19.929 08:00:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:19.929 08:00:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:19.929 08:00:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:19.929 08:00:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:19.929 08:00:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:19.929 08:00:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:19.929 08:00:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:19.929 08:00:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:19.929 08:00:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:19.929 08:00:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.929 08:00:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.186 08:00:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:20.186 "name": "Existed_Raid", 00:14:20.186 "uuid": "debd2bcc-76c9-4415-89ce-3a10d9f36c4d", 00:14:20.186 "strip_size_kb": 0, 00:14:20.186 "state": "online", 00:14:20.186 "raid_level": "raid1", 00:14:20.186 "superblock": false, 00:14:20.186 "num_base_bdevs": 4, 00:14:20.186 "num_base_bdevs_discovered": 3, 00:14:20.186 "num_base_bdevs_operational": 3, 00:14:20.186 "base_bdevs_list": [ 00:14:20.186 { 00:14:20.186 "name": null, 00:14:20.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.186 "is_configured": false, 00:14:20.186 "data_offset": 0, 00:14:20.186 "data_size": 65536 00:14:20.186 }, 00:14:20.186 { 00:14:20.186 "name": "BaseBdev2", 00:14:20.186 "uuid": "a6d395b1-9a38-4ee2-8ea1-1189cbeef549", 00:14:20.186 "is_configured": true, 00:14:20.186 "data_offset": 0, 00:14:20.186 "data_size": 65536 00:14:20.186 }, 00:14:20.186 { 00:14:20.186 "name": "BaseBdev3", 00:14:20.186 "uuid": "3d11ef7a-1d93-48b0-a6bc-381f22f01af9", 00:14:20.186 "is_configured": true, 00:14:20.186 "data_offset": 0, 00:14:20.186 "data_size": 65536 00:14:20.186 }, 00:14:20.186 { 00:14:20.186 "name": "BaseBdev4", 00:14:20.186 "uuid": "4ce4667b-5074-4974-9d45-63535009b814", 00:14:20.186 "is_configured": true, 00:14:20.186 "data_offset": 0, 00:14:20.186 "data_size": 65536 00:14:20.186 } 00:14:20.186 ] 00:14:20.186 }' 00:14:20.186 08:00:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:20.186 08:00:25 -- common/autotest_common.sh@10 -- # set +x 00:14:20.751 08:00:26 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:20.751 08:00:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:20.751 08:00:26 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:20.751 08:00:26 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.010 08:00:26 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:21.010 08:00:26 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:21.010 08:00:26 -- bdev/bdev_raid.sh@279 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:21.010 [2024-07-13 08:00:26.775854] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:21.010 08:00:26 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:21.010 08:00:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:21.010 08:00:26 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:21.010 08:00:26 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.267 08:00:27 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:21.267 08:00:27 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:21.267 08:00:27 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:21.533 [2024-07-13 08:00:27.238333] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:21.533 08:00:27 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:21.533 08:00:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:21.533 08:00:27 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:21.533 08:00:27 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.791 08:00:27 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:21.791 08:00:27 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:21.791 08:00:27 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:14:22.048 [2024-07-13 08:00:27.640783] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:22.048 [2024-07-13 08:00:27.640809] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:22.048 [2024-07-13 08:00:27.640844] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:22.048 [2024-07-13 08:00:27.651132] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:22.048 [2024-07-13 08:00:27.651159] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027c80 name Existed_Raid, state offline 00:14:22.048 08:00:27 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:22.048 08:00:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:22.048 08:00:27 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.048 08:00:27 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:22.306 08:00:27 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:22.306 08:00:27 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:22.306 08:00:27 -- bdev/bdev_raid.sh@287 -- # killprocess 65649 00:14:22.306 08:00:27 -- common/autotest_common.sh@926 -- # '[' -z 65649 ']' 00:14:22.306 08:00:27 -- common/autotest_common.sh@930 -- # kill -0 65649 00:14:22.306 08:00:27 -- common/autotest_common.sh@931 -- # uname 00:14:22.306 08:00:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:22.306 08:00:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65649 00:14:22.306 killing process with pid 65649 00:14:22.306 08:00:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:22.306 08:00:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:22.306 08:00:27 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 65649' 00:14:22.306 08:00:27 -- common/autotest_common.sh@945 -- # kill 65649 00:14:22.306 [2024-07-13 08:00:27.915561] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:22.306 08:00:27 -- common/autotest_common.sh@950 -- # wait 65649 00:14:22.306 [2024-07-13 08:00:27.915606] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:22.307 ************************************ 00:14:22.307 END TEST raid_state_function_test 00:14:22.307 ************************************ 00:14:22.307 08:00:28 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:22.307 00:14:22.307 real 0m10.771s 00:14:22.307 user 0m19.868s 00:14:22.307 sys 0m1.416s 00:14:22.307 08:00:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:22.307 08:00:28 -- common/autotest_common.sh@10 -- # set +x 00:14:22.566 08:00:28 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:14:22.566 08:00:28 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:22.566 08:00:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:22.566 08:00:28 -- common/autotest_common.sh@10 -- # set +x 00:14:22.566 ************************************ 00:14:22.566 START TEST raid_state_function_test_sb 00:14:22.566 ************************************ 00:14:22.566 08:00:28 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 true 00:14:22.566 08:00:28 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:14:22.566 08:00:28 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:14:22.566 08:00:28 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:14:22.566 08:00:28 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:22.566 08:00:28 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:14:22.566 08:00:28 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:22.566 08:00:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:22.566 08:00:28 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:22.566 08:00:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:22.566 08:00:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:22.566 08:00:28 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:22.566 08:00:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:22.566 08:00:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:22.566 08:00:28 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:14:22.566 08:00:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:22.566 08:00:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:22.566 08:00:28 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:14:22.566 08:00:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:22.566 08:00:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:22.566 Process raid pid: 66046 00:14:22.566 08:00:28 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:22.566 08:00:28 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:22.566 08:00:28 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:22.566 08:00:28 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:22.566 08:00:28 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:22.566 08:00:28 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:14:22.566 08:00:28 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:14:22.566 08:00:28 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:14:22.566 08:00:28 -- bdev/bdev_raid.sh@220 -- # 
superblock_create_arg=-s 00:14:22.566 08:00:28 -- bdev/bdev_raid.sh@226 -- # raid_pid=66046 00:14:22.566 08:00:28 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 66046' 00:14:22.566 08:00:28 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:22.566 08:00:28 -- bdev/bdev_raid.sh@228 -- # waitforlisten 66046 /var/tmp/spdk-raid.sock 00:14:22.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:22.566 08:00:28 -- common/autotest_common.sh@819 -- # '[' -z 66046 ']' 00:14:22.566 08:00:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:22.566 08:00:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:22.566 08:00:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:22.566 08:00:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:22.566 08:00:28 -- common/autotest_common.sh@10 -- # set +x 00:14:22.566 [2024-07-13 08:00:28.297431] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:14:22.566 [2024-07-13 08:00:28.297595] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.825 [2024-07-13 08:00:28.427340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.825 [2024-07-13 08:00:28.469700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.825 [2024-07-13 08:00:28.514093] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:23.391 08:00:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:23.391 08:00:29 -- common/autotest_common.sh@852 -- # return 0 00:14:23.391 08:00:29 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:23.649 [2024-07-13 08:00:29.291827] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:23.649 [2024-07-13 08:00:29.291887] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:23.649 [2024-07-13 08:00:29.291898] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:23.649 [2024-07-13 08:00:29.291916] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:23.649 [2024-07-13 08:00:29.291923] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:23.649 [2024-07-13 08:00:29.291953] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:23.649 [2024-07-13 08:00:29.291961] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:23.649 [2024-07-13 08:00:29.291979] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:23.649 08:00:29 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:23.649 08:00:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:23.649 08:00:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:23.649 08:00:29 -- bdev/bdev_raid.sh@119 -- # local 
raid_level=raid1 00:14:23.649 08:00:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:23.649 08:00:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:23.649 08:00:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:23.649 08:00:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:23.649 08:00:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:23.649 08:00:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:23.649 08:00:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.649 08:00:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:23.907 08:00:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:23.907 "name": "Existed_Raid", 00:14:23.907 "uuid": "702f3cf0-688c-48c1-b69d-1ad61a31c327", 00:14:23.907 "strip_size_kb": 0, 00:14:23.907 "state": "configuring", 00:14:23.907 "raid_level": "raid1", 00:14:23.907 "superblock": true, 00:14:23.907 "num_base_bdevs": 4, 00:14:23.907 "num_base_bdevs_discovered": 0, 00:14:23.907 "num_base_bdevs_operational": 4, 00:14:23.907 "base_bdevs_list": [ 00:14:23.907 { 00:14:23.907 "name": "BaseBdev1", 00:14:23.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.907 "is_configured": false, 00:14:23.907 "data_offset": 0, 00:14:23.907 "data_size": 0 00:14:23.907 }, 00:14:23.907 { 00:14:23.907 "name": "BaseBdev2", 00:14:23.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.907 "is_configured": false, 00:14:23.907 "data_offset": 0, 00:14:23.907 "data_size": 0 00:14:23.907 }, 00:14:23.907 { 00:14:23.907 "name": "BaseBdev3", 00:14:23.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.907 "is_configured": false, 00:14:23.907 "data_offset": 0, 00:14:23.907 "data_size": 0 00:14:23.907 }, 00:14:23.907 { 00:14:23.907 "name": "BaseBdev4", 00:14:23.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.907 "is_configured": false, 00:14:23.907 "data_offset": 0, 00:14:23.907 "data_size": 0 00:14:23.907 } 00:14:23.907 ] 00:14:23.907 }' 00:14:23.907 08:00:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:23.907 08:00:29 -- common/autotest_common.sh@10 -- # set +x 00:14:24.474 08:00:30 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:24.474 [2024-07-13 08:00:30.199823] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:24.474 [2024-07-13 08:00:30.199859] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000025880 name Existed_Raid, state configuring 00:14:24.474 08:00:30 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:24.733 [2024-07-13 08:00:30.343904] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:24.733 [2024-07-13 08:00:30.343953] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:24.733 [2024-07-13 08:00:30.343962] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:24.733 [2024-07-13 08:00:30.343987] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:24.733 [2024-07-13 08:00:30.343995] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:24.733 [2024-07-13 
08:00:30.344017] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:24.733 [2024-07-13 08:00:30.344025] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:24.733 [2024-07-13 08:00:30.344045] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:24.733 08:00:30 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:24.733 BaseBdev1 00:14:24.733 [2024-07-13 08:00:30.497574] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:24.733 08:00:30 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:24.733 08:00:30 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:24.733 08:00:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:24.733 08:00:30 -- common/autotest_common.sh@889 -- # local i 00:14:24.733 08:00:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:24.733 08:00:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:24.733 08:00:30 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:24.991 08:00:30 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:24.991 [ 00:14:24.991 { 00:14:24.991 "name": "BaseBdev1", 00:14:24.991 "aliases": [ 00:14:24.991 "094985c2-ca13-471d-853d-0fe90980eaf8" 00:14:24.991 ], 00:14:24.991 "product_name": "Malloc disk", 00:14:24.991 "block_size": 512, 00:14:24.991 "num_blocks": 65536, 00:14:24.991 "uuid": "094985c2-ca13-471d-853d-0fe90980eaf8", 00:14:24.991 "assigned_rate_limits": { 00:14:24.991 "rw_ios_per_sec": 0, 00:14:24.991 "rw_mbytes_per_sec": 0, 00:14:24.991 "r_mbytes_per_sec": 0, 00:14:24.991 "w_mbytes_per_sec": 0 00:14:24.991 }, 00:14:24.991 "claimed": true, 00:14:24.991 "claim_type": "exclusive_write", 00:14:24.991 "zoned": false, 00:14:24.991 "supported_io_types": { 00:14:24.991 "read": true, 00:14:24.991 "write": true, 00:14:24.991 "unmap": true, 00:14:24.991 "write_zeroes": true, 00:14:24.991 "flush": true, 00:14:24.991 "reset": true, 00:14:24.991 "compare": false, 00:14:24.991 "compare_and_write": false, 00:14:24.991 "abort": true, 00:14:24.991 "nvme_admin": false, 00:14:24.992 "nvme_io": false 00:14:24.992 }, 00:14:24.992 "memory_domains": [ 00:14:24.992 { 00:14:24.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.992 "dma_device_type": 2 00:14:24.992 } 00:14:24.992 ], 00:14:24.992 "driver_specific": {} 00:14:24.992 } 00:14:24.992 ] 00:14:24.992 08:00:30 -- common/autotest_common.sh@895 -- # return 0 00:14:24.992 08:00:30 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:24.992 08:00:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:24.992 08:00:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:24.992 08:00:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:24.992 08:00:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:24.992 08:00:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:24.992 08:00:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:24.992 08:00:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:24.992 08:00:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:24.992 08:00:30 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:14:24.992 08:00:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.992 08:00:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.249 08:00:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:25.249 "name": "Existed_Raid", 00:14:25.249 "uuid": "cc83c1b1-edf6-4967-8124-95791cefaa1e", 00:14:25.249 "strip_size_kb": 0, 00:14:25.249 "state": "configuring", 00:14:25.249 "raid_level": "raid1", 00:14:25.249 "superblock": true, 00:14:25.249 "num_base_bdevs": 4, 00:14:25.249 "num_base_bdevs_discovered": 1, 00:14:25.249 "num_base_bdevs_operational": 4, 00:14:25.249 "base_bdevs_list": [ 00:14:25.249 { 00:14:25.249 "name": "BaseBdev1", 00:14:25.249 "uuid": "094985c2-ca13-471d-853d-0fe90980eaf8", 00:14:25.249 "is_configured": true, 00:14:25.249 "data_offset": 2048, 00:14:25.249 "data_size": 63488 00:14:25.249 }, 00:14:25.249 { 00:14:25.249 "name": "BaseBdev2", 00:14:25.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.249 "is_configured": false, 00:14:25.249 "data_offset": 0, 00:14:25.249 "data_size": 0 00:14:25.249 }, 00:14:25.249 { 00:14:25.249 "name": "BaseBdev3", 00:14:25.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.249 "is_configured": false, 00:14:25.249 "data_offset": 0, 00:14:25.249 "data_size": 0 00:14:25.249 }, 00:14:25.249 { 00:14:25.249 "name": "BaseBdev4", 00:14:25.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.249 "is_configured": false, 00:14:25.249 "data_offset": 0, 00:14:25.249 "data_size": 0 00:14:25.250 } 00:14:25.250 ] 00:14:25.250 }' 00:14:25.250 08:00:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:25.250 08:00:30 -- common/autotest_common.sh@10 -- # set +x 00:14:26.183 08:00:31 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:26.183 [2024-07-13 08:00:31.849759] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:26.183 [2024-07-13 08:00:31.849813] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:14:26.183 08:00:31 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:14:26.183 08:00:31 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:26.442 08:00:32 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:26.442 BaseBdev1 00:14:26.442 08:00:32 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:14:26.442 08:00:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:26.442 08:00:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:26.442 08:00:32 -- common/autotest_common.sh@889 -- # local i 00:14:26.442 08:00:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:26.442 08:00:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:26.442 08:00:32 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:26.700 08:00:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:26.960 [ 00:14:26.960 { 00:14:26.960 "name": "BaseBdev1", 00:14:26.960 "aliases": [ 00:14:26.960 
"46d1ab7c-231c-4ba3-be02-9ad3b213984a" 00:14:26.960 ], 00:14:26.960 "product_name": "Malloc disk", 00:14:26.960 "block_size": 512, 00:14:26.960 "num_blocks": 65536, 00:14:26.960 "uuid": "46d1ab7c-231c-4ba3-be02-9ad3b213984a", 00:14:26.960 "assigned_rate_limits": { 00:14:26.960 "rw_ios_per_sec": 0, 00:14:26.960 "rw_mbytes_per_sec": 0, 00:14:26.960 "r_mbytes_per_sec": 0, 00:14:26.960 "w_mbytes_per_sec": 0 00:14:26.960 }, 00:14:26.960 "claimed": false, 00:14:26.960 "zoned": false, 00:14:26.960 "supported_io_types": { 00:14:26.960 "read": true, 00:14:26.960 "write": true, 00:14:26.960 "unmap": true, 00:14:26.960 "write_zeroes": true, 00:14:26.960 "flush": true, 00:14:26.960 "reset": true, 00:14:26.960 "compare": false, 00:14:26.960 "compare_and_write": false, 00:14:26.960 "abort": true, 00:14:26.960 "nvme_admin": false, 00:14:26.960 "nvme_io": false 00:14:26.960 }, 00:14:26.960 "memory_domains": [ 00:14:26.960 { 00:14:26.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.960 "dma_device_type": 2 00:14:26.960 } 00:14:26.960 ], 00:14:26.960 "driver_specific": {} 00:14:26.960 } 00:14:26.960 ] 00:14:26.960 08:00:32 -- common/autotest_common.sh@895 -- # return 0 00:14:26.960 08:00:32 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:27.217 [2024-07-13 08:00:32.804409] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:27.217 [2024-07-13 08:00:32.805823] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:27.217 [2024-07-13 08:00:32.805890] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:27.217 [2024-07-13 08:00:32.805901] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:27.217 [2024-07-13 08:00:32.805921] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:27.217 [2024-07-13 08:00:32.805929] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:27.217 [2024-07-13 08:00:32.805946] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:27.217 08:00:32 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:27.217 08:00:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:27.217 08:00:32 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:27.217 08:00:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:27.217 08:00:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:27.217 08:00:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:27.217 08:00:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:27.217 08:00:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:27.217 08:00:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:27.217 08:00:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:27.217 08:00:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:27.217 08:00:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:27.217 08:00:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.217 08:00:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.217 08:00:33 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:14:27.217 "name": "Existed_Raid", 00:14:27.217 "uuid": "6065b3e0-bdf8-42ce-ba25-da8aac95e815", 00:14:27.217 "strip_size_kb": 0, 00:14:27.217 "state": "configuring", 00:14:27.217 "raid_level": "raid1", 00:14:27.217 "superblock": true, 00:14:27.217 "num_base_bdevs": 4, 00:14:27.217 "num_base_bdevs_discovered": 1, 00:14:27.217 "num_base_bdevs_operational": 4, 00:14:27.217 "base_bdevs_list": [ 00:14:27.217 { 00:14:27.217 "name": "BaseBdev1", 00:14:27.217 "uuid": "46d1ab7c-231c-4ba3-be02-9ad3b213984a", 00:14:27.217 "is_configured": true, 00:14:27.217 "data_offset": 2048, 00:14:27.217 "data_size": 63488 00:14:27.217 }, 00:14:27.217 { 00:14:27.217 "name": "BaseBdev2", 00:14:27.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.217 "is_configured": false, 00:14:27.217 "data_offset": 0, 00:14:27.217 "data_size": 0 00:14:27.217 }, 00:14:27.217 { 00:14:27.217 "name": "BaseBdev3", 00:14:27.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.217 "is_configured": false, 00:14:27.217 "data_offset": 0, 00:14:27.217 "data_size": 0 00:14:27.217 }, 00:14:27.217 { 00:14:27.217 "name": "BaseBdev4", 00:14:27.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.217 "is_configured": false, 00:14:27.217 "data_offset": 0, 00:14:27.217 "data_size": 0 00:14:27.217 } 00:14:27.217 ] 00:14:27.217 }' 00:14:27.217 08:00:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:27.217 08:00:33 -- common/autotest_common.sh@10 -- # set +x 00:14:27.793 08:00:33 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:28.050 BaseBdev2 00:14:28.050 [2024-07-13 08:00:33.659942] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:28.050 08:00:33 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:28.050 08:00:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:28.050 08:00:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:28.050 08:00:33 -- common/autotest_common.sh@889 -- # local i 00:14:28.050 08:00:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:28.050 08:00:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:28.050 08:00:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:28.308 08:00:33 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:28.308 [ 00:14:28.308 { 00:14:28.308 "name": "BaseBdev2", 00:14:28.308 "aliases": [ 00:14:28.308 "4fe5590d-3432-4ede-ae46-c7d0edcbecf9" 00:14:28.308 ], 00:14:28.308 "product_name": "Malloc disk", 00:14:28.308 "block_size": 512, 00:14:28.308 "num_blocks": 65536, 00:14:28.308 "uuid": "4fe5590d-3432-4ede-ae46-c7d0edcbecf9", 00:14:28.308 "assigned_rate_limits": { 00:14:28.308 "rw_ios_per_sec": 0, 00:14:28.308 "rw_mbytes_per_sec": 0, 00:14:28.308 "r_mbytes_per_sec": 0, 00:14:28.308 "w_mbytes_per_sec": 0 00:14:28.308 }, 00:14:28.308 "claimed": true, 00:14:28.308 "claim_type": "exclusive_write", 00:14:28.308 "zoned": false, 00:14:28.308 "supported_io_types": { 00:14:28.308 "read": true, 00:14:28.308 "write": true, 00:14:28.308 "unmap": true, 00:14:28.308 "write_zeroes": true, 00:14:28.308 "flush": true, 00:14:28.308 "reset": true, 00:14:28.308 "compare": false, 00:14:28.308 "compare_and_write": false, 00:14:28.308 "abort": true, 00:14:28.308 "nvme_admin": false, 00:14:28.308 
"nvme_io": false 00:14:28.308 }, 00:14:28.308 "memory_domains": [ 00:14:28.308 { 00:14:28.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.308 "dma_device_type": 2 00:14:28.308 } 00:14:28.308 ], 00:14:28.308 "driver_specific": {} 00:14:28.308 } 00:14:28.308 ] 00:14:28.308 08:00:34 -- common/autotest_common.sh@895 -- # return 0 00:14:28.308 08:00:34 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:28.308 08:00:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:28.308 08:00:34 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:28.308 08:00:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:28.308 08:00:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:28.308 08:00:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:28.308 08:00:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:28.308 08:00:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:28.308 08:00:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:28.308 08:00:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:28.308 08:00:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:28.308 08:00:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:28.308 08:00:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.308 08:00:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.566 08:00:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:28.566 "name": "Existed_Raid", 00:14:28.566 "uuid": "6065b3e0-bdf8-42ce-ba25-da8aac95e815", 00:14:28.566 "strip_size_kb": 0, 00:14:28.566 "state": "configuring", 00:14:28.566 "raid_level": "raid1", 00:14:28.566 "superblock": true, 00:14:28.566 "num_base_bdevs": 4, 00:14:28.566 "num_base_bdevs_discovered": 2, 00:14:28.566 "num_base_bdevs_operational": 4, 00:14:28.566 "base_bdevs_list": [ 00:14:28.566 { 00:14:28.566 "name": "BaseBdev1", 00:14:28.566 "uuid": "46d1ab7c-231c-4ba3-be02-9ad3b213984a", 00:14:28.566 "is_configured": true, 00:14:28.566 "data_offset": 2048, 00:14:28.566 "data_size": 63488 00:14:28.566 }, 00:14:28.566 { 00:14:28.566 "name": "BaseBdev2", 00:14:28.566 "uuid": "4fe5590d-3432-4ede-ae46-c7d0edcbecf9", 00:14:28.566 "is_configured": true, 00:14:28.566 "data_offset": 2048, 00:14:28.566 "data_size": 63488 00:14:28.566 }, 00:14:28.566 { 00:14:28.566 "name": "BaseBdev3", 00:14:28.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.566 "is_configured": false, 00:14:28.566 "data_offset": 0, 00:14:28.566 "data_size": 0 00:14:28.566 }, 00:14:28.566 { 00:14:28.566 "name": "BaseBdev4", 00:14:28.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.566 "is_configured": false, 00:14:28.566 "data_offset": 0, 00:14:28.566 "data_size": 0 00:14:28.566 } 00:14:28.566 ] 00:14:28.566 }' 00:14:28.566 08:00:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:28.566 08:00:34 -- common/autotest_common.sh@10 -- # set +x 00:14:29.133 08:00:34 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:29.391 BaseBdev3 00:14:29.391 [2024-07-13 08:00:35.091750] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:29.391 08:00:35 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:14:29.391 08:00:35 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:14:29.391 08:00:35 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:29.391 08:00:35 -- common/autotest_common.sh@889 -- # local i 00:14:29.391 08:00:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:29.391 08:00:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:29.391 08:00:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:29.649 08:00:35 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:29.649 [ 00:14:29.649 { 00:14:29.649 "name": "BaseBdev3", 00:14:29.649 "aliases": [ 00:14:29.649 "d0c58b8c-d6c4-4786-8738-4185d985eb47" 00:14:29.649 ], 00:14:29.649 "product_name": "Malloc disk", 00:14:29.649 "block_size": 512, 00:14:29.649 "num_blocks": 65536, 00:14:29.649 "uuid": "d0c58b8c-d6c4-4786-8738-4185d985eb47", 00:14:29.649 "assigned_rate_limits": { 00:14:29.649 "rw_ios_per_sec": 0, 00:14:29.649 "rw_mbytes_per_sec": 0, 00:14:29.649 "r_mbytes_per_sec": 0, 00:14:29.649 "w_mbytes_per_sec": 0 00:14:29.649 }, 00:14:29.649 "claimed": true, 00:14:29.649 "claim_type": "exclusive_write", 00:14:29.649 "zoned": false, 00:14:29.649 "supported_io_types": { 00:14:29.649 "read": true, 00:14:29.649 "write": true, 00:14:29.649 "unmap": true, 00:14:29.649 "write_zeroes": true, 00:14:29.649 "flush": true, 00:14:29.649 "reset": true, 00:14:29.649 "compare": false, 00:14:29.649 "compare_and_write": false, 00:14:29.649 "abort": true, 00:14:29.649 "nvme_admin": false, 00:14:29.649 "nvme_io": false 00:14:29.649 }, 00:14:29.649 "memory_domains": [ 00:14:29.649 { 00:14:29.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.649 "dma_device_type": 2 00:14:29.649 } 00:14:29.649 ], 00:14:29.649 "driver_specific": {} 00:14:29.649 } 00:14:29.649 ] 00:14:29.649 08:00:35 -- common/autotest_common.sh@895 -- # return 0 00:14:29.649 08:00:35 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:29.649 08:00:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:29.649 08:00:35 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:29.649 08:00:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:29.649 08:00:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:29.649 08:00:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:29.649 08:00:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:29.649 08:00:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:29.649 08:00:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:29.649 08:00:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:29.649 08:00:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:29.649 08:00:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:29.649 08:00:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:29.907 08:00:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.907 08:00:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:29.907 "name": "Existed_Raid", 00:14:29.907 "uuid": "6065b3e0-bdf8-42ce-ba25-da8aac95e815", 00:14:29.907 "strip_size_kb": 0, 00:14:29.907 "state": "configuring", 00:14:29.907 "raid_level": "raid1", 00:14:29.907 "superblock": true, 00:14:29.907 "num_base_bdevs": 4, 00:14:29.907 "num_base_bdevs_discovered": 3, 00:14:29.907 "num_base_bdevs_operational": 4, 00:14:29.907 
"base_bdevs_list": [ 00:14:29.907 { 00:14:29.907 "name": "BaseBdev1", 00:14:29.907 "uuid": "46d1ab7c-231c-4ba3-be02-9ad3b213984a", 00:14:29.907 "is_configured": true, 00:14:29.907 "data_offset": 2048, 00:14:29.907 "data_size": 63488 00:14:29.907 }, 00:14:29.907 { 00:14:29.907 "name": "BaseBdev2", 00:14:29.907 "uuid": "4fe5590d-3432-4ede-ae46-c7d0edcbecf9", 00:14:29.907 "is_configured": true, 00:14:29.907 "data_offset": 2048, 00:14:29.907 "data_size": 63488 00:14:29.907 }, 00:14:29.907 { 00:14:29.907 "name": "BaseBdev3", 00:14:29.907 "uuid": "d0c58b8c-d6c4-4786-8738-4185d985eb47", 00:14:29.907 "is_configured": true, 00:14:29.907 "data_offset": 2048, 00:14:29.907 "data_size": 63488 00:14:29.907 }, 00:14:29.907 { 00:14:29.907 "name": "BaseBdev4", 00:14:29.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.907 "is_configured": false, 00:14:29.907 "data_offset": 0, 00:14:29.907 "data_size": 0 00:14:29.907 } 00:14:29.907 ] 00:14:29.907 }' 00:14:29.907 08:00:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:29.907 08:00:35 -- common/autotest_common.sh@10 -- # set +x 00:14:30.474 08:00:36 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:30.732 [2024-07-13 08:00:36.307507] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:30.732 BaseBdev4 00:14:30.732 [2024-07-13 08:00:36.307629] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000028280 00:14:30.732 [2024-07-13 08:00:36.307639] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:30.733 [2024-07-13 08:00:36.307708] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:14:30.733 [2024-07-13 08:00:36.307900] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000028280 00:14:30.733 [2024-07-13 08:00:36.307910] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000028280 00:14:30.733 [2024-07-13 08:00:36.307980] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.733 08:00:36 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:14:30.733 08:00:36 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:14:30.733 08:00:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:30.733 08:00:36 -- common/autotest_common.sh@889 -- # local i 00:14:30.733 08:00:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:30.733 08:00:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:30.733 08:00:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:30.733 08:00:36 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:30.990 [ 00:14:30.990 { 00:14:30.990 "name": "BaseBdev4", 00:14:30.990 "aliases": [ 00:14:30.990 "cec05e2c-4c19-4c54-b8b8-b2170ae89796" 00:14:30.990 ], 00:14:30.990 "product_name": "Malloc disk", 00:14:30.990 "block_size": 512, 00:14:30.990 "num_blocks": 65536, 00:14:30.990 "uuid": "cec05e2c-4c19-4c54-b8b8-b2170ae89796", 00:14:30.990 "assigned_rate_limits": { 00:14:30.990 "rw_ios_per_sec": 0, 00:14:30.990 "rw_mbytes_per_sec": 0, 00:14:30.990 "r_mbytes_per_sec": 0, 00:14:30.990 "w_mbytes_per_sec": 0 00:14:30.990 }, 00:14:30.990 "claimed": true, 00:14:30.990 "claim_type": 
"exclusive_write", 00:14:30.990 "zoned": false, 00:14:30.990 "supported_io_types": { 00:14:30.990 "read": true, 00:14:30.990 "write": true, 00:14:30.990 "unmap": true, 00:14:30.990 "write_zeroes": true, 00:14:30.990 "flush": true, 00:14:30.990 "reset": true, 00:14:30.990 "compare": false, 00:14:30.990 "compare_and_write": false, 00:14:30.990 "abort": true, 00:14:30.990 "nvme_admin": false, 00:14:30.990 "nvme_io": false 00:14:30.990 }, 00:14:30.990 "memory_domains": [ 00:14:30.990 { 00:14:30.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.990 "dma_device_type": 2 00:14:30.990 } 00:14:30.990 ], 00:14:30.990 "driver_specific": {} 00:14:30.990 } 00:14:30.990 ] 00:14:30.990 08:00:36 -- common/autotest_common.sh@895 -- # return 0 00:14:30.990 08:00:36 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:30.990 08:00:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:30.990 08:00:36 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:14:30.990 08:00:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:30.990 08:00:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:30.990 08:00:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:30.990 08:00:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:30.990 08:00:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:30.990 08:00:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:30.990 08:00:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:30.990 08:00:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:30.990 08:00:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:30.990 08:00:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:30.990 08:00:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.247 08:00:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:31.247 "name": "Existed_Raid", 00:14:31.247 "uuid": "6065b3e0-bdf8-42ce-ba25-da8aac95e815", 00:14:31.247 "strip_size_kb": 0, 00:14:31.247 "state": "online", 00:14:31.247 "raid_level": "raid1", 00:14:31.247 "superblock": true, 00:14:31.247 "num_base_bdevs": 4, 00:14:31.247 "num_base_bdevs_discovered": 4, 00:14:31.247 "num_base_bdevs_operational": 4, 00:14:31.247 "base_bdevs_list": [ 00:14:31.247 { 00:14:31.247 "name": "BaseBdev1", 00:14:31.247 "uuid": "46d1ab7c-231c-4ba3-be02-9ad3b213984a", 00:14:31.247 "is_configured": true, 00:14:31.247 "data_offset": 2048, 00:14:31.247 "data_size": 63488 00:14:31.247 }, 00:14:31.247 { 00:14:31.247 "name": "BaseBdev2", 00:14:31.247 "uuid": "4fe5590d-3432-4ede-ae46-c7d0edcbecf9", 00:14:31.247 "is_configured": true, 00:14:31.247 "data_offset": 2048, 00:14:31.247 "data_size": 63488 00:14:31.247 }, 00:14:31.247 { 00:14:31.247 "name": "BaseBdev3", 00:14:31.247 "uuid": "d0c58b8c-d6c4-4786-8738-4185d985eb47", 00:14:31.247 "is_configured": true, 00:14:31.247 "data_offset": 2048, 00:14:31.247 "data_size": 63488 00:14:31.247 }, 00:14:31.247 { 00:14:31.247 "name": "BaseBdev4", 00:14:31.247 "uuid": "cec05e2c-4c19-4c54-b8b8-b2170ae89796", 00:14:31.247 "is_configured": true, 00:14:31.247 "data_offset": 2048, 00:14:31.247 "data_size": 63488 00:14:31.247 } 00:14:31.247 ] 00:14:31.247 }' 00:14:31.247 08:00:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:31.247 08:00:36 -- common/autotest_common.sh@10 -- # set +x 00:14:31.814 08:00:37 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:32.076 [2024-07-13 08:00:37.751748] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:32.076 08:00:37 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:32.076 08:00:37 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:14:32.076 08:00:37 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:32.076 08:00:37 -- bdev/bdev_raid.sh@196 -- # return 0 00:14:32.076 08:00:37 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:14:32.076 08:00:37 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:32.076 08:00:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:32.077 08:00:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:32.077 08:00:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:32.077 08:00:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:32.077 08:00:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:32.077 08:00:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:32.077 08:00:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:32.077 08:00:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:32.077 08:00:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:32.077 08:00:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.077 08:00:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:32.337 08:00:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:32.337 "name": "Existed_Raid", 00:14:32.337 "uuid": "6065b3e0-bdf8-42ce-ba25-da8aac95e815", 00:14:32.337 "strip_size_kb": 0, 00:14:32.337 "state": "online", 00:14:32.337 "raid_level": "raid1", 00:14:32.337 "superblock": true, 00:14:32.337 "num_base_bdevs": 4, 00:14:32.337 "num_base_bdevs_discovered": 3, 00:14:32.337 "num_base_bdevs_operational": 3, 00:14:32.337 "base_bdevs_list": [ 00:14:32.337 { 00:14:32.337 "name": null, 00:14:32.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.337 "is_configured": false, 00:14:32.337 "data_offset": 2048, 00:14:32.337 "data_size": 63488 00:14:32.337 }, 00:14:32.337 { 00:14:32.337 "name": "BaseBdev2", 00:14:32.337 "uuid": "4fe5590d-3432-4ede-ae46-c7d0edcbecf9", 00:14:32.337 "is_configured": true, 00:14:32.337 "data_offset": 2048, 00:14:32.337 "data_size": 63488 00:14:32.337 }, 00:14:32.337 { 00:14:32.337 "name": "BaseBdev3", 00:14:32.337 "uuid": "d0c58b8c-d6c4-4786-8738-4185d985eb47", 00:14:32.337 "is_configured": true, 00:14:32.337 "data_offset": 2048, 00:14:32.337 "data_size": 63488 00:14:32.337 }, 00:14:32.337 { 00:14:32.337 "name": "BaseBdev4", 00:14:32.337 "uuid": "cec05e2c-4c19-4c54-b8b8-b2170ae89796", 00:14:32.337 "is_configured": true, 00:14:32.337 "data_offset": 2048, 00:14:32.337 "data_size": 63488 00:14:32.337 } 00:14:32.337 ] 00:14:32.337 }' 00:14:32.337 08:00:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:32.337 08:00:37 -- common/autotest_common.sh@10 -- # set +x 00:14:32.903 08:00:38 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:32.903 08:00:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:32.903 08:00:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:32.903 08:00:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:33.161 08:00:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:33.161 08:00:38 -- 
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:33.161 08:00:38 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:33.161 [2024-07-13 08:00:38.897098] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:33.161 08:00:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:33.161 08:00:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:33.161 08:00:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:33.161 08:00:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:33.420 08:00:39 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:33.420 08:00:39 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:33.420 08:00:39 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:33.678 [2024-07-13 08:00:39.279459] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:33.678 08:00:39 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:33.678 08:00:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:33.678 08:00:39 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:33.678 08:00:39 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:33.936 08:00:39 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:33.936 08:00:39 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:33.936 08:00:39 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:14:33.936 [2024-07-13 08:00:39.693839] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:33.936 [2024-07-13 08:00:39.693863] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:33.936 [2024-07-13 08:00:39.693904] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:33.936 [2024-07-13 08:00:39.704132] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:33.936 [2024-07-13 08:00:39.704158] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000028280 name Existed_Raid, state offline 00:14:33.936 08:00:39 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:33.936 08:00:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:33.936 08:00:39 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:33.936 08:00:39 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:34.195 08:00:39 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:34.195 08:00:39 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:34.195 08:00:39 -- bdev/bdev_raid.sh@287 -- # killprocess 66046 00:14:34.195 08:00:39 -- common/autotest_common.sh@926 -- # '[' -z 66046 ']' 00:14:34.195 08:00:39 -- common/autotest_common.sh@930 -- # kill -0 66046 00:14:34.195 08:00:39 -- common/autotest_common.sh@931 -- # uname 00:14:34.195 08:00:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:34.195 08:00:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66046 00:14:34.195 killing process with pid 66046 00:14:34.195 08:00:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:34.195 
08:00:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:34.195 08:00:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66046' 00:14:34.195 08:00:39 -- common/autotest_common.sh@945 -- # kill 66046 00:14:34.195 08:00:39 -- common/autotest_common.sh@950 -- # wait 66046 00:14:34.195 [2024-07-13 08:00:39.936123] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:34.195 [2024-07-13 08:00:39.936174] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:34.453 08:00:40 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:34.453 00:14:34.453 real 0m11.969s 00:14:34.453 user 0m22.180s 00:14:34.453 sys 0m1.514s 00:14:34.453 08:00:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:34.453 ************************************ 00:14:34.453 END TEST raid_state_function_test_sb 00:14:34.453 ************************************ 00:14:34.453 08:00:40 -- common/autotest_common.sh@10 -- # set +x 00:14:34.453 08:00:40 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:14:34.453 08:00:40 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:14:34.453 08:00:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:34.453 08:00:40 -- common/autotest_common.sh@10 -- # set +x 00:14:34.453 ************************************ 00:14:34.453 START TEST raid_superblock_test 00:14:34.453 ************************************ 00:14:34.453 08:00:40 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 4 00:14:34.453 08:00:40 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:14:34.453 08:00:40 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:14:34.453 08:00:40 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:14:34.453 08:00:40 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:14:34.453 08:00:40 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:14:34.453 08:00:40 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:14:34.453 08:00:40 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:14:34.453 08:00:40 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:14:34.453 08:00:40 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:14:34.453 08:00:40 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:14:34.453 08:00:40 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:14:34.453 08:00:40 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:14:34.453 08:00:40 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:14:34.453 08:00:40 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:14:34.453 08:00:40 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:14:34.453 08:00:40 -- bdev/bdev_raid.sh@357 -- # raid_pid=66468 00:14:34.453 08:00:40 -- bdev/bdev_raid.sh@358 -- # waitforlisten 66468 /var/tmp/spdk-raid.sock 00:14:34.453 08:00:40 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:34.453 08:00:40 -- common/autotest_common.sh@819 -- # '[' -z 66468 ']' 00:14:34.453 08:00:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:34.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:34.453 08:00:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:34.453 08:00:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
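Everything raid_superblock_test does from this point on is plain rpc.py traffic against the bdev_svc instance started above. A minimal standalone sketch of the setup phase follows, assuming bdev_svc is already listening on /var/tmp/spdk-raid.sock; the $rpc/$sock shorthand and the loop are illustrative conveniences, but each command is the same rpc.py call the log shows being issued:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    for i in 1 2 3 4; do
        # 32 MB malloc bdev with 512-byte blocks -> 65536 blocks, matching the bdev dumps in this log
        "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "malloc$i"
        # wrap it in a passthru bdev with a fixed UUID; the raid superblock is written through ptN
        "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done

    # assemble the four passthru bdevs into a raid1 with an on-disk superblock (-s)
    "$rpc" -s "$sock" bdev_raid_create -s -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1

Once the create call returns, bdev_raid_get_bdevs reports raid_bdev1 as "online" with all four base bdevs discovered, which is what the verify_raid_bdev_state check below asserts.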
00:14:34.453 08:00:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:34.453 08:00:40 -- common/autotest_common.sh@10 -- # set +x 00:14:34.711 [2024-07-13 08:00:40.321106] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:14:34.711 [2024-07-13 08:00:40.321266] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66468 ] 00:14:34.711 [2024-07-13 08:00:40.451944] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.711 [2024-07-13 08:00:40.494282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.988 [2024-07-13 08:00:40.538652] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:35.607 08:00:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:35.607 08:00:41 -- common/autotest_common.sh@852 -- # return 0 00:14:35.607 08:00:41 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:14:35.607 08:00:41 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:35.607 08:00:41 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:14:35.607 08:00:41 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:14:35.607 08:00:41 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:35.607 08:00:41 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:35.607 08:00:41 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:35.607 08:00:41 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:35.607 08:00:41 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:35.607 malloc1 00:14:35.607 08:00:41 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:35.865 [2024-07-13 08:00:41.566506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:35.865 [2024-07-13 08:00:41.566582] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.865 [2024-07-13 08:00:41.566625] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000026180 00:14:35.865 [2024-07-13 08:00:41.566664] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.865 [2024-07-13 08:00:41.568315] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.865 [2024-07-13 08:00:41.568359] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:35.865 pt1 00:14:35.865 08:00:41 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:35.865 08:00:41 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:35.865 08:00:41 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:14:35.865 08:00:41 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:14:35.865 08:00:41 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:35.865 08:00:41 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:35.865 08:00:41 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:35.865 08:00:41 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:35.865 08:00:41 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:36.124 malloc2 00:14:36.124 08:00:41 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:36.124 [2024-07-13 08:00:41.855318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:36.124 [2024-07-13 08:00:41.855375] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.124 [2024-07-13 08:00:41.855416] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027f80 00:14:36.124 [2024-07-13 08:00:41.855447] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.124 [2024-07-13 08:00:41.856922] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.124 [2024-07-13 08:00:41.856967] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:36.124 pt2 00:14:36.124 08:00:41 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:36.124 08:00:41 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:36.124 08:00:41 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:14:36.124 08:00:41 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:14:36.124 08:00:41 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:36.124 08:00:41 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:36.124 08:00:41 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:36.124 08:00:41 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:36.124 08:00:41 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:14:36.383 malloc3 00:14:36.383 08:00:42 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:36.383 [2024-07-13 08:00:42.144124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:36.383 [2024-07-13 08:00:42.144198] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.383 [2024-07-13 08:00:42.144273] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000029d80 00:14:36.383 [2024-07-13 08:00:42.144309] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.383 [2024-07-13 08:00:42.145925] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.383 [2024-07-13 08:00:42.145973] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:36.383 pt3 00:14:36.383 08:00:42 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:36.383 08:00:42 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:36.383 08:00:42 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:14:36.383 08:00:42 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:14:36.383 08:00:42 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:36.383 08:00:42 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:36.383 08:00:42 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:36.383 08:00:42 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:36.383 08:00:42 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:14:36.642 malloc4 00:14:36.642 08:00:42 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:36.900 [2024-07-13 08:00:42.501019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:36.900 [2024-07-13 08:00:42.501104] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.900 [2024-07-13 08:00:42.501142] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002bb80 00:14:36.900 [2024-07-13 08:00:42.501174] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.900 [2024-07-13 08:00:42.502746] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.900 [2024-07-13 08:00:42.502786] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:36.900 pt4 00:14:36.901 08:00:42 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:36.901 08:00:42 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:36.901 08:00:42 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:14:36.901 [2024-07-13 08:00:42.641116] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:36.901 [2024-07-13 08:00:42.642521] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:36.901 [2024-07-13 08:00:42.642566] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:36.901 [2024-07-13 08:00:42.642589] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:36.901 [2024-07-13 08:00:42.642677] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002d080 00:14:36.901 [2024-07-13 08:00:42.642687] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:36.901 [2024-07-13 08:00:42.642765] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:14:36.901 [2024-07-13 08:00:42.642968] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002d080 00:14:36.901 [2024-07-13 08:00:42.642977] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002d080 00:14:36.901 [2024-07-13 08:00:42.643051] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.901 08:00:42 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:36.901 08:00:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:36.901 08:00:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:36.901 08:00:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:36.901 08:00:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:36.901 08:00:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:36.901 08:00:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:36.901 08:00:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:36.901 08:00:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:36.901 08:00:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:36.901 08:00:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.901 08:00:42 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.159 08:00:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:37.159 "name": "raid_bdev1", 00:14:37.159 "uuid": "104d9cb1-ec77-4d55-b60f-987d7c5bb2cf", 00:14:37.159 "strip_size_kb": 0, 00:14:37.159 "state": "online", 00:14:37.159 "raid_level": "raid1", 00:14:37.159 "superblock": true, 00:14:37.159 "num_base_bdevs": 4, 00:14:37.159 "num_base_bdevs_discovered": 4, 00:14:37.159 "num_base_bdevs_operational": 4, 00:14:37.159 "base_bdevs_list": [ 00:14:37.159 { 00:14:37.159 "name": "pt1", 00:14:37.159 "uuid": "0e4253b4-0825-59df-bf23-119ec531bbca", 00:14:37.159 "is_configured": true, 00:14:37.159 "data_offset": 2048, 00:14:37.159 "data_size": 63488 00:14:37.159 }, 00:14:37.159 { 00:14:37.159 "name": "pt2", 00:14:37.159 "uuid": "5841b68f-d43c-5a56-ba33-92532510f7b1", 00:14:37.159 "is_configured": true, 00:14:37.159 "data_offset": 2048, 00:14:37.159 "data_size": 63488 00:14:37.159 }, 00:14:37.159 { 00:14:37.159 "name": "pt3", 00:14:37.159 "uuid": "3b2e3e41-ccb5-55a0-ba4b-bed24b6f3dc3", 00:14:37.159 "is_configured": true, 00:14:37.159 "data_offset": 2048, 00:14:37.159 "data_size": 63488 00:14:37.159 }, 00:14:37.159 { 00:14:37.159 "name": "pt4", 00:14:37.159 "uuid": "089f305e-9d89-5e22-8a69-cbb23e4ac43a", 00:14:37.159 "is_configured": true, 00:14:37.159 "data_offset": 2048, 00:14:37.159 "data_size": 63488 00:14:37.159 } 00:14:37.159 ] 00:14:37.159 }' 00:14:37.159 08:00:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:37.159 08:00:42 -- common/autotest_common.sh@10 -- # set +x 00:14:37.726 08:00:43 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:14:37.726 08:00:43 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:37.985 [2024-07-13 08:00:43.613303] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:37.985 08:00:43 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=104d9cb1-ec77-4d55-b60f-987d7c5bb2cf 00:14:37.985 08:00:43 -- bdev/bdev_raid.sh@380 -- # '[' -z 104d9cb1-ec77-4d55-b60f-987d7c5bb2cf ']' 00:14:37.985 08:00:43 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:37.985 [2024-07-13 08:00:43.761116] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:37.985 [2024-07-13 08:00:43.761138] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:37.985 [2024-07-13 08:00:43.761203] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:37.985 [2024-07-13 08:00:43.761248] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:37.985 [2024-07-13 08:00:43.761256] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002d080 name raid_bdev1, state offline 00:14:37.985 08:00:43 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:14:37.985 08:00:43 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:38.244 08:00:43 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:14:38.244 08:00:43 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:14:38.244 08:00:43 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:38.244 08:00:43 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
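The teardown above has deleted raid_bdev1 and is now removing the passthru bdevs one by one; once pt4 is gone, the script verifies that no passthru bdev survived before attempting to re-assemble the array. A standalone sketch of that check, using the same bdev_get_bdevs/jq pipeline the script runs at bdev_raid.sh@395 ($rpc and $sock as in the sketch above):

    # prints 'false' once pt1..pt4 are gone, 'true' if any passthru bdev survived
    "$rpc" -s "$sock" bdev_get_bdevs \
        | jq -r '[.[] | select(.product_name == "passthru")] | any'

This also explains the deliberate failure below: re-running bdev_raid_create against the raw malloc1..malloc4 bdevs is expected to be rejected with JSON-RPC error -17 ("File exists"), because each mallocN still carries the raid superblock that was written through its passthru wrapper.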
00:14:38.503 08:00:44 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:38.503 08:00:44 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:38.772 08:00:44 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:38.772 08:00:44 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:38.772 08:00:44 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:38.772 08:00:44 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:14:39.034 08:00:44 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:39.034 08:00:44 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:39.292 08:00:44 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:14:39.292 08:00:44 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:14:39.292 08:00:44 -- common/autotest_common.sh@640 -- # local es=0 00:14:39.292 08:00:44 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:14:39.292 08:00:44 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:39.292 08:00:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:39.292 08:00:44 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:39.292 08:00:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:39.292 08:00:44 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:39.292 08:00:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:39.292 08:00:44 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:39.292 08:00:44 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:39.292 08:00:44 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:14:39.292 [2024-07-13 08:00:45.001301] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:39.292 [2024-07-13 08:00:45.002563] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:39.292 [2024-07-13 08:00:45.002600] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:39.292 [2024-07-13 08:00:45.002619] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:39.292 [2024-07-13 08:00:45.002646] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:14:39.292 [2024-07-13 08:00:45.002700] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:14:39.292 [2024-07-13 08:00:45.002726] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:14:39.292 [2024-07-13 08:00:45.002765] 
bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:14:39.292 [2024-07-13 08:00:45.002784] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:39.292 [2024-07-13 08:00:45.002793] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002d680 name raid_bdev1, state configuring 00:14:39.292 request: 00:14:39.292 { 00:14:39.292 "name": "raid_bdev1", 00:14:39.292 "raid_level": "raid1", 00:14:39.292 "base_bdevs": [ 00:14:39.292 "malloc1", 00:14:39.292 "malloc2", 00:14:39.292 "malloc3", 00:14:39.292 "malloc4" 00:14:39.292 ], 00:14:39.292 "superblock": false, 00:14:39.292 "method": "bdev_raid_create", 00:14:39.292 "req_id": 1 00:14:39.292 } 00:14:39.292 Got JSON-RPC error response 00:14:39.292 response: 00:14:39.292 { 00:14:39.292 "code": -17, 00:14:39.292 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:39.292 } 00:14:39.292 08:00:45 -- common/autotest_common.sh@643 -- # es=1 00:14:39.292 08:00:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:39.292 08:00:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:39.292 08:00:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:39.292 08:00:45 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:14:39.292 08:00:45 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:39.549 08:00:45 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:14:39.549 08:00:45 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:14:39.549 08:00:45 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:39.807 [2024-07-13 08:00:45.373306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:39.807 [2024-07-13 08:00:45.373409] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.807 [2024-07-13 08:00:45.373450] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002eb80 00:14:39.807 [2024-07-13 08:00:45.373483] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.807 [2024-07-13 08:00:45.374881] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.807 [2024-07-13 08:00:45.374932] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:39.807 [2024-07-13 08:00:45.375006] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:14:39.807 [2024-07-13 08:00:45.375057] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:39.807 pt1 00:14:39.807 08:00:45 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:14:39.807 08:00:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:39.807 08:00:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:39.807 08:00:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:39.807 08:00:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:39.807 08:00:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:39.807 08:00:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:39.807 08:00:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:39.807 08:00:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:39.807 08:00:45 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:14:39.807 08:00:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.807 08:00:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:39.808 08:00:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:39.808 "name": "raid_bdev1", 00:14:39.808 "uuid": "104d9cb1-ec77-4d55-b60f-987d7c5bb2cf", 00:14:39.808 "strip_size_kb": 0, 00:14:39.808 "state": "configuring", 00:14:39.808 "raid_level": "raid1", 00:14:39.808 "superblock": true, 00:14:39.808 "num_base_bdevs": 4, 00:14:39.808 "num_base_bdevs_discovered": 1, 00:14:39.808 "num_base_bdevs_operational": 4, 00:14:39.808 "base_bdevs_list": [ 00:14:39.808 { 00:14:39.808 "name": "pt1", 00:14:39.808 "uuid": "0e4253b4-0825-59df-bf23-119ec531bbca", 00:14:39.808 "is_configured": true, 00:14:39.808 "data_offset": 2048, 00:14:39.808 "data_size": 63488 00:14:39.808 }, 00:14:39.808 { 00:14:39.808 "name": null, 00:14:39.808 "uuid": "5841b68f-d43c-5a56-ba33-92532510f7b1", 00:14:39.808 "is_configured": false, 00:14:39.808 "data_offset": 2048, 00:14:39.808 "data_size": 63488 00:14:39.808 }, 00:14:39.808 { 00:14:39.808 "name": null, 00:14:39.808 "uuid": "3b2e3e41-ccb5-55a0-ba4b-bed24b6f3dc3", 00:14:39.808 "is_configured": false, 00:14:39.808 "data_offset": 2048, 00:14:39.808 "data_size": 63488 00:14:39.808 }, 00:14:39.808 { 00:14:39.808 "name": null, 00:14:39.808 "uuid": "089f305e-9d89-5e22-8a69-cbb23e4ac43a", 00:14:39.808 "is_configured": false, 00:14:39.808 "data_offset": 2048, 00:14:39.808 "data_size": 63488 00:14:39.808 } 00:14:39.808 ] 00:14:39.808 }' 00:14:39.808 08:00:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:39.808 08:00:45 -- common/autotest_common.sh@10 -- # set +x 00:14:40.384 08:00:46 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:14:40.384 08:00:46 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:40.642 [2024-07-13 08:00:46.365404] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:40.642 [2024-07-13 08:00:46.365498] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.642 [2024-07-13 08:00:46.365542] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000030980 00:14:40.642 [2024-07-13 08:00:46.365564] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.642 [2024-07-13 08:00:46.365833] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.642 [2024-07-13 08:00:46.365867] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:40.642 [2024-07-13 08:00:46.365922] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:40.642 [2024-07-13 08:00:46.365942] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:40.642 pt2 00:14:40.642 08:00:46 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:40.898 [2024-07-13 08:00:46.573488] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:40.898 08:00:46 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:14:40.899 08:00:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:40.899 08:00:46 -- bdev/bdev_raid.sh@118 -- # 
local expected_state=configuring 00:14:40.899 08:00:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:40.899 08:00:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:40.899 08:00:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:40.899 08:00:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:40.899 08:00:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:40.899 08:00:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:40.899 08:00:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:40.899 08:00:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:40.899 08:00:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.156 08:00:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:41.156 "name": "raid_bdev1", 00:14:41.156 "uuid": "104d9cb1-ec77-4d55-b60f-987d7c5bb2cf", 00:14:41.156 "strip_size_kb": 0, 00:14:41.156 "state": "configuring", 00:14:41.156 "raid_level": "raid1", 00:14:41.156 "superblock": true, 00:14:41.156 "num_base_bdevs": 4, 00:14:41.156 "num_base_bdevs_discovered": 1, 00:14:41.156 "num_base_bdevs_operational": 4, 00:14:41.156 "base_bdevs_list": [ 00:14:41.156 { 00:14:41.156 "name": "pt1", 00:14:41.156 "uuid": "0e4253b4-0825-59df-bf23-119ec531bbca", 00:14:41.156 "is_configured": true, 00:14:41.156 "data_offset": 2048, 00:14:41.156 "data_size": 63488 00:14:41.156 }, 00:14:41.156 { 00:14:41.156 "name": null, 00:14:41.156 "uuid": "5841b68f-d43c-5a56-ba33-92532510f7b1", 00:14:41.156 "is_configured": false, 00:14:41.156 "data_offset": 2048, 00:14:41.156 "data_size": 63488 00:14:41.156 }, 00:14:41.156 { 00:14:41.156 "name": null, 00:14:41.156 "uuid": "3b2e3e41-ccb5-55a0-ba4b-bed24b6f3dc3", 00:14:41.156 "is_configured": false, 00:14:41.156 "data_offset": 2048, 00:14:41.157 "data_size": 63488 00:14:41.157 }, 00:14:41.157 { 00:14:41.157 "name": null, 00:14:41.157 "uuid": "089f305e-9d89-5e22-8a69-cbb23e4ac43a", 00:14:41.157 "is_configured": false, 00:14:41.157 "data_offset": 2048, 00:14:41.157 "data_size": 63488 00:14:41.157 } 00:14:41.157 ] 00:14:41.157 }' 00:14:41.157 08:00:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:41.157 08:00:46 -- common/autotest_common.sh@10 -- # set +x 00:14:41.721 08:00:47 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:14:41.721 08:00:47 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:41.721 08:00:47 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:41.721 [2024-07-13 08:00:47.517579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:41.721 [2024-07-13 08:00:47.517655] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.721 [2024-07-13 08:00:47.517696] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000031e80 00:14:41.721 [2024-07-13 08:00:47.517716] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.721 [2024-07-13 08:00:47.517994] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.721 [2024-07-13 08:00:47.518032] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:41.721 [2024-07-13 08:00:47.518096] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:41.721 [2024-07-13 
08:00:47.518114] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:41.721 pt2 00:14:41.721 08:00:47 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:41.721 08:00:47 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:41.721 08:00:47 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:41.979 [2024-07-13 08:00:47.661559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:41.979 [2024-07-13 08:00:47.661634] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.979 [2024-07-13 08:00:47.661665] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000033380 00:14:41.979 [2024-07-13 08:00:47.661688] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.979 [2024-07-13 08:00:47.661927] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.979 [2024-07-13 08:00:47.661973] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:41.979 [2024-07-13 08:00:47.662021] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:14:41.979 [2024-07-13 08:00:47.662039] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:41.979 pt3 00:14:41.979 08:00:47 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:41.979 08:00:47 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:41.979 08:00:47 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:42.237 [2024-07-13 08:00:47.877638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:42.237 [2024-07-13 08:00:47.877719] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.237 [2024-07-13 08:00:47.877770] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000034880 00:14:42.237 [2024-07-13 08:00:47.877798] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.237 [2024-07-13 08:00:47.878049] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.237 [2024-07-13 08:00:47.878103] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:42.237 [2024-07-13 08:00:47.878151] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:14:42.237 [2024-07-13 08:00:47.878169] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:42.237 [2024-07-13 08:00:47.878235] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000030380 00:14:42.237 [2024-07-13 08:00:47.878243] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:42.237 [2024-07-13 08:00:47.878283] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:14:42.237 [2024-07-13 08:00:47.878432] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000030380 00:14:42.237 [2024-07-13 08:00:47.878449] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000030380 00:14:42.237 [2024-07-13 08:00:47.878511] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.237 pt4 
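Re-registering the passthru bdevs one at a time is what drives the raid bdev through its states: each bdev_passthru_create re-exposes a malloc bdev whose on-disk raid superblock lets the examine path claim it into raid_bdev1, and the array only flips from configuring to online once the last base (pt4 here) is claimed. A hedged sketch of that assembly loop, reusing the names and UUIDs shown in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    for i in 2 3 4; do
        # the superblock written earlier marks each base as a member of raid_bdev1
        $rpc -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
    # with all four bases claimed the raid bdev should report "online"
    $rpc -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'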
00:14:42.237 08:00:47 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:42.237 08:00:47 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:42.237 08:00:47 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:42.237 08:00:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:42.237 08:00:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:42.237 08:00:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:42.237 08:00:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:42.237 08:00:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:42.237 08:00:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:42.237 08:00:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:42.237 08:00:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:42.237 08:00:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:42.237 08:00:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.237 08:00:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.495 08:00:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:42.495 "name": "raid_bdev1", 00:14:42.495 "uuid": "104d9cb1-ec77-4d55-b60f-987d7c5bb2cf", 00:14:42.495 "strip_size_kb": 0, 00:14:42.495 "state": "online", 00:14:42.495 "raid_level": "raid1", 00:14:42.495 "superblock": true, 00:14:42.495 "num_base_bdevs": 4, 00:14:42.495 "num_base_bdevs_discovered": 4, 00:14:42.495 "num_base_bdevs_operational": 4, 00:14:42.495 "base_bdevs_list": [ 00:14:42.495 { 00:14:42.495 "name": "pt1", 00:14:42.495 "uuid": "0e4253b4-0825-59df-bf23-119ec531bbca", 00:14:42.495 "is_configured": true, 00:14:42.495 "data_offset": 2048, 00:14:42.495 "data_size": 63488 00:14:42.495 }, 00:14:42.495 { 00:14:42.495 "name": "pt2", 00:14:42.495 "uuid": "5841b68f-d43c-5a56-ba33-92532510f7b1", 00:14:42.495 "is_configured": true, 00:14:42.495 "data_offset": 2048, 00:14:42.495 "data_size": 63488 00:14:42.495 }, 00:14:42.495 { 00:14:42.495 "name": "pt3", 00:14:42.495 "uuid": "3b2e3e41-ccb5-55a0-ba4b-bed24b6f3dc3", 00:14:42.495 "is_configured": true, 00:14:42.495 "data_offset": 2048, 00:14:42.495 "data_size": 63488 00:14:42.495 }, 00:14:42.495 { 00:14:42.495 "name": "pt4", 00:14:42.495 "uuid": "089f305e-9d89-5e22-8a69-cbb23e4ac43a", 00:14:42.495 "is_configured": true, 00:14:42.495 "data_offset": 2048, 00:14:42.495 "data_size": 63488 00:14:42.495 } 00:14:42.495 ] 00:14:42.495 }' 00:14:42.495 08:00:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:42.495 08:00:48 -- common/autotest_common.sh@10 -- # set +x 00:14:43.060 08:00:48 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:43.060 08:00:48 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:14:43.316 [2024-07-13 08:00:48.945909] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:43.316 08:00:48 -- bdev/bdev_raid.sh@430 -- # '[' 104d9cb1-ec77-4d55-b60f-987d7c5bb2cf '!=' 104d9cb1-ec77-4d55-b60f-987d7c5bb2cf ']' 00:14:43.316 08:00:48 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:14:43.316 08:00:48 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:43.316 08:00:48 -- bdev/bdev_raid.sh@196 -- # return 0 00:14:43.316 08:00:48 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:43.316 [2024-07-13 08:00:49.109868] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:43.316 08:00:49 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:43.316 08:00:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:43.316 08:00:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:43.316 08:00:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:43.316 08:00:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:43.316 08:00:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:43.316 08:00:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:43.316 08:00:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:43.316 08:00:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:43.316 08:00:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:43.316 08:00:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:43.316 08:00:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.573 08:00:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:43.573 "name": "raid_bdev1", 00:14:43.573 "uuid": "104d9cb1-ec77-4d55-b60f-987d7c5bb2cf", 00:14:43.573 "strip_size_kb": 0, 00:14:43.573 "state": "online", 00:14:43.573 "raid_level": "raid1", 00:14:43.573 "superblock": true, 00:14:43.573 "num_base_bdevs": 4, 00:14:43.573 "num_base_bdevs_discovered": 3, 00:14:43.573 "num_base_bdevs_operational": 3, 00:14:43.573 "base_bdevs_list": [ 00:14:43.573 { 00:14:43.573 "name": null, 00:14:43.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.573 "is_configured": false, 00:14:43.573 "data_offset": 2048, 00:14:43.573 "data_size": 63488 00:14:43.573 }, 00:14:43.573 { 00:14:43.573 "name": "pt2", 00:14:43.573 "uuid": "5841b68f-d43c-5a56-ba33-92532510f7b1", 00:14:43.573 "is_configured": true, 00:14:43.573 "data_offset": 2048, 00:14:43.573 "data_size": 63488 00:14:43.573 }, 00:14:43.573 { 00:14:43.573 "name": "pt3", 00:14:43.573 "uuid": "3b2e3e41-ccb5-55a0-ba4b-bed24b6f3dc3", 00:14:43.573 "is_configured": true, 00:14:43.573 "data_offset": 2048, 00:14:43.573 "data_size": 63488 00:14:43.573 }, 00:14:43.573 { 00:14:43.573 "name": "pt4", 00:14:43.573 "uuid": "089f305e-9d89-5e22-8a69-cbb23e4ac43a", 00:14:43.573 "is_configured": true, 00:14:43.573 "data_offset": 2048, 00:14:43.573 "data_size": 63488 00:14:43.573 } 00:14:43.573 ] 00:14:43.573 }' 00:14:43.573 08:00:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:43.573 08:00:49 -- common/autotest_common.sh@10 -- # set +x 00:14:44.140 08:00:49 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:44.140 [2024-07-13 08:00:49.817829] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:44.140 [2024-07-13 08:00:49.817861] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:44.140 [2024-07-13 08:00:49.817905] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:44.140 [2024-07-13 08:00:49.817944] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:44.140 [2024-07-13 08:00:49.817953] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000030380 name raid_bdev1, state offline 00:14:44.140 08:00:49 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:14:44.140 08:00:49 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:14:44.399 08:00:50 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:14:44.399 08:00:50 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:14:44.399 08:00:50 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:14:44.399 08:00:50 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:14:44.399 08:00:50 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:44.656 08:00:50 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:14:44.656 08:00:50 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:14:44.656 08:00:50 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:44.913 08:00:50 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:14:44.913 08:00:50 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:14:44.913 08:00:50 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:14:44.913 08:00:50 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:14:44.913 08:00:50 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:14:44.913 08:00:50 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:14:44.913 08:00:50 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:14:44.913 08:00:50 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:45.171 [2024-07-13 08:00:50.773906] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:45.171 [2024-07-13 08:00:50.773964] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.171 [2024-07-13 08:00:50.774008] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000035d80 00:14:45.171 [2024-07-13 08:00:50.774033] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.171 [2024-07-13 08:00:50.775534] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.171 [2024-07-13 08:00:50.775584] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:45.171 [2024-07-13 08:00:50.775640] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:45.171 [2024-07-13 08:00:50.775667] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:45.171 pt2 00:14:45.171 08:00:50 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:45.171 08:00:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:45.171 08:00:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:45.171 08:00:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:45.171 08:00:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:45.171 08:00:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:45.171 08:00:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:45.171 08:00:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:45.171 08:00:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:45.171 08:00:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:45.171 08:00:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.171 08:00:50 -- bdev/bdev_raid.sh@127 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.430 08:00:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:45.430 "name": "raid_bdev1", 00:14:45.430 "uuid": "104d9cb1-ec77-4d55-b60f-987d7c5bb2cf", 00:14:45.430 "strip_size_kb": 0, 00:14:45.430 "state": "configuring", 00:14:45.430 "raid_level": "raid1", 00:14:45.430 "superblock": true, 00:14:45.430 "num_base_bdevs": 4, 00:14:45.430 "num_base_bdevs_discovered": 1, 00:14:45.430 "num_base_bdevs_operational": 3, 00:14:45.430 "base_bdevs_list": [ 00:14:45.430 { 00:14:45.430 "name": null, 00:14:45.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.430 "is_configured": false, 00:14:45.430 "data_offset": 2048, 00:14:45.430 "data_size": 63488 00:14:45.430 }, 00:14:45.430 { 00:14:45.430 "name": "pt2", 00:14:45.430 "uuid": "5841b68f-d43c-5a56-ba33-92532510f7b1", 00:14:45.430 "is_configured": true, 00:14:45.430 "data_offset": 2048, 00:14:45.430 "data_size": 63488 00:14:45.430 }, 00:14:45.430 { 00:14:45.430 "name": null, 00:14:45.430 "uuid": "3b2e3e41-ccb5-55a0-ba4b-bed24b6f3dc3", 00:14:45.430 "is_configured": false, 00:14:45.430 "data_offset": 2048, 00:14:45.430 "data_size": 63488 00:14:45.430 }, 00:14:45.430 { 00:14:45.430 "name": null, 00:14:45.430 "uuid": "089f305e-9d89-5e22-8a69-cbb23e4ac43a", 00:14:45.430 "is_configured": false, 00:14:45.430 "data_offset": 2048, 00:14:45.430 "data_size": 63488 00:14:45.430 } 00:14:45.430 ] 00:14:45.430 }' 00:14:45.430 08:00:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:45.430 08:00:50 -- common/autotest_common.sh@10 -- # set +x 00:14:45.997 08:00:51 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:14:45.997 08:00:51 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:14:45.997 08:00:51 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:45.997 [2024-07-13 08:00:51.782011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:45.997 [2024-07-13 08:00:51.782065] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.997 [2024-07-13 08:00:51.782103] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000037880 00:14:45.997 [2024-07-13 08:00:51.782127] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.997 [2024-07-13 08:00:51.782373] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.997 [2024-07-13 08:00:51.782405] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:45.997 [2024-07-13 08:00:51.782453] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:14:45.997 [2024-07-13 08:00:51.782481] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:45.997 pt3 00:14:45.997 08:00:51 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:45.997 08:00:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:45.997 08:00:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:45.997 08:00:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:45.997 08:00:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:45.997 08:00:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:45.997 08:00:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:45.997 08:00:51 -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:14:45.997 08:00:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:45.997 08:00:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:45.997 08:00:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.997 08:00:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.256 08:00:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:46.256 "name": "raid_bdev1", 00:14:46.256 "uuid": "104d9cb1-ec77-4d55-b60f-987d7c5bb2cf", 00:14:46.256 "strip_size_kb": 0, 00:14:46.256 "state": "configuring", 00:14:46.256 "raid_level": "raid1", 00:14:46.256 "superblock": true, 00:14:46.256 "num_base_bdevs": 4, 00:14:46.256 "num_base_bdevs_discovered": 2, 00:14:46.256 "num_base_bdevs_operational": 3, 00:14:46.256 "base_bdevs_list": [ 00:14:46.256 { 00:14:46.256 "name": null, 00:14:46.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.256 "is_configured": false, 00:14:46.256 "data_offset": 2048, 00:14:46.256 "data_size": 63488 00:14:46.256 }, 00:14:46.256 { 00:14:46.256 "name": "pt2", 00:14:46.256 "uuid": "5841b68f-d43c-5a56-ba33-92532510f7b1", 00:14:46.256 "is_configured": true, 00:14:46.256 "data_offset": 2048, 00:14:46.256 "data_size": 63488 00:14:46.256 }, 00:14:46.256 { 00:14:46.256 "name": "pt3", 00:14:46.256 "uuid": "3b2e3e41-ccb5-55a0-ba4b-bed24b6f3dc3", 00:14:46.256 "is_configured": true, 00:14:46.256 "data_offset": 2048, 00:14:46.256 "data_size": 63488 00:14:46.256 }, 00:14:46.256 { 00:14:46.256 "name": null, 00:14:46.256 "uuid": "089f305e-9d89-5e22-8a69-cbb23e4ac43a", 00:14:46.256 "is_configured": false, 00:14:46.256 "data_offset": 2048, 00:14:46.256 "data_size": 63488 00:14:46.256 } 00:14:46.256 ] 00:14:46.256 }' 00:14:46.256 08:00:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:46.256 08:00:51 -- common/autotest_common.sh@10 -- # set +x 00:14:46.823 08:00:52 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:14:46.823 08:00:52 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:14:46.823 08:00:52 -- bdev/bdev_raid.sh@462 -- # i=3 00:14:46.823 08:00:52 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:47.081 [2024-07-13 08:00:52.786138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:47.081 [2024-07-13 08:00:52.786209] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.081 [2024-07-13 08:00:52.786253] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000038d80 00:14:47.081 [2024-07-13 08:00:52.786271] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.081 [2024-07-13 08:00:52.786521] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.081 [2024-07-13 08:00:52.786548] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:47.081 [2024-07-13 08:00:52.786595] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:14:47.081 [2024-07-13 08:00:52.786613] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:47.081 [2024-07-13 08:00:52.786675] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000037280 00:14:47.081 [2024-07-13 08:00:52.786683] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 
00:14:47.081 [2024-07-13 08:00:52.786722] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0 00:14:47.081 [2024-07-13 08:00:52.786878] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000037280 00:14:47.081 [2024-07-13 08:00:52.786888] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000037280 00:14:47.081 [2024-07-13 08:00:52.786937] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.081 pt4 00:14:47.081 08:00:52 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:47.081 08:00:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:47.081 08:00:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:47.081 08:00:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:47.081 08:00:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:47.081 08:00:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:47.081 08:00:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:47.081 08:00:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:47.081 08:00:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:47.081 08:00:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:47.081 08:00:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.081 08:00:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.339 08:00:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:47.339 "name": "raid_bdev1", 00:14:47.339 "uuid": "104d9cb1-ec77-4d55-b60f-987d7c5bb2cf", 00:14:47.339 "strip_size_kb": 0, 00:14:47.339 "state": "online", 00:14:47.339 "raid_level": "raid1", 00:14:47.339 "superblock": true, 00:14:47.339 "num_base_bdevs": 4, 00:14:47.339 "num_base_bdevs_discovered": 3, 00:14:47.339 "num_base_bdevs_operational": 3, 00:14:47.339 "base_bdevs_list": [ 00:14:47.339 { 00:14:47.339 "name": null, 00:14:47.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.339 "is_configured": false, 00:14:47.339 "data_offset": 2048, 00:14:47.339 "data_size": 63488 00:14:47.339 }, 00:14:47.339 { 00:14:47.339 "name": "pt2", 00:14:47.339 "uuid": "5841b68f-d43c-5a56-ba33-92532510f7b1", 00:14:47.339 "is_configured": true, 00:14:47.339 "data_offset": 2048, 00:14:47.339 "data_size": 63488 00:14:47.339 }, 00:14:47.339 { 00:14:47.339 "name": "pt3", 00:14:47.339 "uuid": "3b2e3e41-ccb5-55a0-ba4b-bed24b6f3dc3", 00:14:47.339 "is_configured": true, 00:14:47.339 "data_offset": 2048, 00:14:47.339 "data_size": 63488 00:14:47.339 }, 00:14:47.339 { 00:14:47.339 "name": "pt4", 00:14:47.339 "uuid": "089f305e-9d89-5e22-8a69-cbb23e4ac43a", 00:14:47.339 "is_configured": true, 00:14:47.340 "data_offset": 2048, 00:14:47.340 "data_size": 63488 00:14:47.340 } 00:14:47.340 ] 00:14:47.340 }' 00:14:47.340 08:00:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:47.340 08:00:52 -- common/autotest_common.sh@10 -- # set +x 00:14:47.904 08:00:53 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:14:47.905 08:00:53 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:47.905 [2024-07-13 08:00:53.642203] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:47.905 [2024-07-13 08:00:53.642230] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:14:47.905 [2024-07-13 08:00:53.642286] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:47.905 [2024-07-13 08:00:53.642322] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:47.905 [2024-07-13 08:00:53.642331] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000037280 name raid_bdev1, state offline 00:14:47.905 08:00:53 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.905 08:00:53 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:14:48.162 08:00:53 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:14:48.162 08:00:53 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:14:48.162 08:00:53 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:48.421 [2024-07-13 08:00:54.066256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:48.421 [2024-07-13 08:00:54.066332] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.421 [2024-07-13 08:00:54.066384] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003a280 00:14:48.421 [2024-07-13 08:00:54.066404] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.421 [2024-07-13 08:00:54.067933] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.421 [2024-07-13 08:00:54.067993] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:48.421 [2024-07-13 08:00:54.068043] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:14:48.421 [2024-07-13 08:00:54.068068] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:48.421 pt1 00:14:48.421 08:00:54 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:14:48.421 08:00:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:48.421 08:00:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:48.421 08:00:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:48.421 08:00:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:48.421 08:00:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:48.421 08:00:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:48.421 08:00:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:48.421 08:00:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:48.421 08:00:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:48.421 08:00:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.421 08:00:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.679 08:00:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:48.679 "name": "raid_bdev1", 00:14:48.679 "uuid": "104d9cb1-ec77-4d55-b60f-987d7c5bb2cf", 00:14:48.679 "strip_size_kb": 0, 00:14:48.679 "state": "configuring", 00:14:48.679 "raid_level": "raid1", 00:14:48.679 "superblock": true, 00:14:48.679 "num_base_bdevs": 4, 00:14:48.679 "num_base_bdevs_discovered": 1, 00:14:48.679 "num_base_bdevs_operational": 4, 00:14:48.679 "base_bdevs_list": [ 00:14:48.679 { 00:14:48.679 "name": "pt1", 00:14:48.679 "uuid": 
"0e4253b4-0825-59df-bf23-119ec531bbca", 00:14:48.679 "is_configured": true, 00:14:48.679 "data_offset": 2048, 00:14:48.679 "data_size": 63488 00:14:48.679 }, 00:14:48.679 { 00:14:48.679 "name": null, 00:14:48.679 "uuid": "5841b68f-d43c-5a56-ba33-92532510f7b1", 00:14:48.679 "is_configured": false, 00:14:48.679 "data_offset": 2048, 00:14:48.679 "data_size": 63488 00:14:48.679 }, 00:14:48.679 { 00:14:48.679 "name": null, 00:14:48.679 "uuid": "3b2e3e41-ccb5-55a0-ba4b-bed24b6f3dc3", 00:14:48.679 "is_configured": false, 00:14:48.679 "data_offset": 2048, 00:14:48.679 "data_size": 63488 00:14:48.679 }, 00:14:48.679 { 00:14:48.679 "name": null, 00:14:48.679 "uuid": "089f305e-9d89-5e22-8a69-cbb23e4ac43a", 00:14:48.679 "is_configured": false, 00:14:48.679 "data_offset": 2048, 00:14:48.679 "data_size": 63488 00:14:48.679 } 00:14:48.679 ] 00:14:48.679 }' 00:14:48.679 08:00:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:48.679 08:00:54 -- common/autotest_common.sh@10 -- # set +x 00:14:49.244 08:00:54 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:14:49.244 08:00:54 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:14:49.244 08:00:54 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:49.244 08:00:55 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:14:49.244 08:00:55 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:14:49.244 08:00:55 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:49.501 08:00:55 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:14:49.501 08:00:55 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:14:49.501 08:00:55 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:14:49.759 08:00:55 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:14:49.759 08:00:55 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:14:49.759 08:00:55 -- bdev/bdev_raid.sh@489 -- # i=3 00:14:49.759 08:00:55 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:49.759 [2024-07-13 08:00:55.494395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:49.759 [2024-07-13 08:00:55.494468] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.759 [2024-07-13 08:00:55.494510] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003bd80 00:14:49.759 [2024-07-13 08:00:55.494535] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.759 [2024-07-13 08:00:55.494780] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.759 [2024-07-13 08:00:55.494815] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:49.759 [2024-07-13 08:00:55.494878] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:14:49.759 [2024-07-13 08:00:55.494889] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:49.759 [2024-07-13 08:00:55.494897] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:49.759 [2024-07-13 08:00:55.494916] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003b780 name raid_bdev1, state configuring 
00:14:49.759 [2024-07-13 08:00:55.494953] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:49.759 pt4 00:14:49.759 08:00:55 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:49.759 08:00:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:49.759 08:00:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:49.759 08:00:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:49.759 08:00:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:49.759 08:00:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:49.759 08:00:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:49.759 08:00:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:49.759 08:00:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:49.759 08:00:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:49.759 08:00:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.759 08:00:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.018 08:00:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:50.018 "name": "raid_bdev1", 00:14:50.018 "uuid": "104d9cb1-ec77-4d55-b60f-987d7c5bb2cf", 00:14:50.018 "strip_size_kb": 0, 00:14:50.018 "state": "configuring", 00:14:50.018 "raid_level": "raid1", 00:14:50.018 "superblock": true, 00:14:50.018 "num_base_bdevs": 4, 00:14:50.018 "num_base_bdevs_discovered": 1, 00:14:50.018 "num_base_bdevs_operational": 3, 00:14:50.018 "base_bdevs_list": [ 00:14:50.018 { 00:14:50.018 "name": null, 00:14:50.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.018 "is_configured": false, 00:14:50.018 "data_offset": 2048, 00:14:50.018 "data_size": 63488 00:14:50.018 }, 00:14:50.018 { 00:14:50.018 "name": null, 00:14:50.018 "uuid": "5841b68f-d43c-5a56-ba33-92532510f7b1", 00:14:50.018 "is_configured": false, 00:14:50.018 "data_offset": 2048, 00:14:50.018 "data_size": 63488 00:14:50.018 }, 00:14:50.018 { 00:14:50.018 "name": null, 00:14:50.018 "uuid": "3b2e3e41-ccb5-55a0-ba4b-bed24b6f3dc3", 00:14:50.018 "is_configured": false, 00:14:50.018 "data_offset": 2048, 00:14:50.018 "data_size": 63488 00:14:50.018 }, 00:14:50.018 { 00:14:50.018 "name": "pt4", 00:14:50.018 "uuid": "089f305e-9d89-5e22-8a69-cbb23e4ac43a", 00:14:50.018 "is_configured": true, 00:14:50.018 "data_offset": 2048, 00:14:50.018 "data_size": 63488 00:14:50.018 } 00:14:50.018 ] 00:14:50.018 }' 00:14:50.018 08:00:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:50.018 08:00:55 -- common/autotest_common.sh@10 -- # set +x 00:14:50.585 08:00:56 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:14:50.585 08:00:56 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:14:50.585 08:00:56 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:50.585 [2024-07-13 08:00:56.362528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:50.585 [2024-07-13 08:00:56.362614] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:50.585 [2024-07-13 08:00:56.362677] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003d580 00:14:50.585 [2024-07-13 08:00:56.362700] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:50.585 [2024-07-13 
08:00:56.362963] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:50.585 [2024-07-13 08:00:56.363015] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:50.585 [2024-07-13 08:00:56.363058] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:50.585 [2024-07-13 08:00:56.363075] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:50.585 pt2 00:14:50.585 08:00:56 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:14:50.585 08:00:56 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:14:50.585 08:00:56 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:50.844 [2024-07-13 08:00:56.582526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:50.844 [2024-07-13 08:00:56.582588] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:50.844 [2024-07-13 08:00:56.582637] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003ea80 00:14:50.844 [2024-07-13 08:00:56.582663] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:50.844 [2024-07-13 08:00:56.582889] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:50.844 [2024-07-13 08:00:56.582923] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:50.844 [2024-07-13 08:00:56.582968] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:14:50.844 [2024-07-13 08:00:56.582985] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:50.844 [2024-07-13 08:00:56.583038] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003cf80 00:14:50.844 [2024-07-13 08:00:56.583046] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:50.844 [2024-07-13 08:00:56.583090] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002fc0 00:14:50.844 [2024-07-13 08:00:56.583251] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003cf80 00:14:50.844 [2024-07-13 08:00:56.583269] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600003cf80 00:14:50.844 [2024-07-13 08:00:56.583320] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.844 pt3 00:14:50.844 08:00:56 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:14:50.844 08:00:56 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:14:50.844 08:00:56 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:50.844 08:00:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:50.844 08:00:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:50.844 08:00:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:50.844 08:00:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:50.844 08:00:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:50.844 08:00:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:50.844 08:00:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:50.844 08:00:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:50.844 08:00:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:50.844 08:00:56 
-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.844 08:00:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.104 08:00:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:51.104 "name": "raid_bdev1", 00:14:51.104 "uuid": "104d9cb1-ec77-4d55-b60f-987d7c5bb2cf", 00:14:51.104 "strip_size_kb": 0, 00:14:51.104 "state": "online", 00:14:51.104 "raid_level": "raid1", 00:14:51.104 "superblock": true, 00:14:51.104 "num_base_bdevs": 4, 00:14:51.104 "num_base_bdevs_discovered": 3, 00:14:51.104 "num_base_bdevs_operational": 3, 00:14:51.104 "base_bdevs_list": [ 00:14:51.104 { 00:14:51.104 "name": null, 00:14:51.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.104 "is_configured": false, 00:14:51.104 "data_offset": 2048, 00:14:51.104 "data_size": 63488 00:14:51.104 }, 00:14:51.104 { 00:14:51.104 "name": "pt2", 00:14:51.104 "uuid": "5841b68f-d43c-5a56-ba33-92532510f7b1", 00:14:51.104 "is_configured": true, 00:14:51.104 "data_offset": 2048, 00:14:51.104 "data_size": 63488 00:14:51.104 }, 00:14:51.104 { 00:14:51.104 "name": "pt3", 00:14:51.104 "uuid": "3b2e3e41-ccb5-55a0-ba4b-bed24b6f3dc3", 00:14:51.104 "is_configured": true, 00:14:51.104 "data_offset": 2048, 00:14:51.104 "data_size": 63488 00:14:51.104 }, 00:14:51.104 { 00:14:51.104 "name": "pt4", 00:14:51.104 "uuid": "089f305e-9d89-5e22-8a69-cbb23e4ac43a", 00:14:51.104 "is_configured": true, 00:14:51.104 "data_offset": 2048, 00:14:51.104 "data_size": 63488 00:14:51.104 } 00:14:51.104 ] 00:14:51.104 }' 00:14:51.104 08:00:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:51.104 08:00:56 -- common/autotest_common.sh@10 -- # set +x 00:14:51.671 08:00:57 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:51.671 08:00:57 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:14:51.929 [2024-07-13 08:00:57.534750] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:51.929 08:00:57 -- bdev/bdev_raid.sh@506 -- # '[' 104d9cb1-ec77-4d55-b60f-987d7c5bb2cf '!=' 104d9cb1-ec77-4d55-b60f-987d7c5bb2cf ']' 00:14:51.929 08:00:57 -- bdev/bdev_raid.sh@511 -- # killprocess 66468 00:14:51.929 08:00:57 -- common/autotest_common.sh@926 -- # '[' -z 66468 ']' 00:14:51.929 08:00:57 -- common/autotest_common.sh@930 -- # kill -0 66468 00:14:51.929 08:00:57 -- common/autotest_common.sh@931 -- # uname 00:14:51.929 08:00:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:51.929 08:00:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66468 00:14:51.929 killing process with pid 66468 00:14:51.929 08:00:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:51.929 08:00:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:51.929 08:00:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66468' 00:14:51.929 08:00:57 -- common/autotest_common.sh@945 -- # kill 66468 00:14:51.929 [2024-07-13 08:00:57.578330] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:51.929 08:00:57 -- common/autotest_common.sh@950 -- # wait 66468 00:14:51.929 [2024-07-13 08:00:57.578373] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:51.929 [2024-07-13 08:00:57.578411] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:51.929 [2024-07-13 08:00:57.578419] 
bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003cf80 name raid_bdev1, state offline 00:14:51.929 [2024-07-13 08:00:57.617366] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:52.188 08:00:57 -- bdev/bdev_raid.sh@513 -- # return 0 00:14:52.188 00:14:52.188 real 0m17.622s 00:14:52.188 user 0m33.263s 00:14:52.188 sys 0m2.170s 00:14:52.188 08:00:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:52.188 08:00:57 -- common/autotest_common.sh@10 -- # set +x 00:14:52.188 ************************************ 00:14:52.188 END TEST raid_superblock_test 00:14:52.188 ************************************ 00:14:52.188 08:00:57 -- bdev/bdev_raid.sh@733 -- # '[' '' = true ']' 00:14:52.188 08:00:57 -- bdev/bdev_raid.sh@742 -- # '[' n == y ']' 00:14:52.188 08:00:57 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:14:52.188 ************************************ 00:14:52.188 END TEST bdev_raid 00:14:52.188 ************************************ 00:14:52.188 00:14:52.188 real 4m29.202s 00:14:52.188 user 8m9.477s 00:14:52.188 sys 0m36.753s 00:14:52.188 08:00:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:52.188 08:00:57 -- common/autotest_common.sh@10 -- # set +x 00:14:52.188 08:00:57 -- spdk/autotest.sh@197 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:14:52.188 08:00:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:52.188 08:00:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:52.188 08:00:57 -- common/autotest_common.sh@10 -- # set +x 00:14:52.188 ************************************ 00:14:52.188 START TEST bdevperf_config 00:14:52.188 ************************************ 00:14:52.188 08:00:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:14:52.188 * Looking for test storage... 
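The bdevperf_config suite that starts here drives the standalone bdevperf example binary: common.sh's create_job (traced below) appends one [section] at a time to test.conf, and the binary is then run against the JSON bdev configuration plus that job file. The traced invocation reduces to the following, with paths exactly as they appear in the log:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json
    jobs=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
    # -t 2: run I/O for 2 seconds; --json: bdev configuration; -j: job file built by create_job
    "$bdevperf" -t 2 --json "$conf" -j "$jobs"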
00:14:52.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:14:52.188 08:00:57 -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:14:52.188 08:00:57 -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:14:52.188 08:00:57 -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:14:52.188 08:00:57 -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:14:52.188 08:00:57 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:52.188 08:00:57 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:14:52.188 08:00:57 -- bdevperf/common.sh@8 -- # local job_section=global 00:14:52.188 08:00:57 -- bdevperf/common.sh@9 -- # local rw=read 00:14:52.188 08:00:57 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:14:52.188 08:00:57 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:14:52.188 08:00:57 -- bdevperf/common.sh@13 -- # cat 00:14:52.188 08:00:57 -- bdevperf/common.sh@18 -- # job='[global]' 00:14:52.188 08:00:57 -- bdevperf/common.sh@19 -- # echo 00:14:52.188 00:14:52.188 08:00:57 -- bdevperf/common.sh@20 -- # cat 00:14:52.188 00:14:52.188 08:00:57 -- bdevperf/test_config.sh@18 -- # create_job job0 00:14:52.188 08:00:57 -- bdevperf/common.sh@8 -- # local job_section=job0 00:14:52.188 08:00:57 -- bdevperf/common.sh@9 -- # local rw= 00:14:52.188 08:00:57 -- bdevperf/common.sh@10 -- # local filename= 00:14:52.188 08:00:57 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:14:52.189 08:00:57 -- bdevperf/common.sh@18 -- # job='[job0]' 00:14:52.189 08:00:57 -- bdevperf/common.sh@19 -- # echo 00:14:52.189 08:00:57 -- bdevperf/common.sh@20 -- # cat 00:14:52.447 00:14:52.447 08:00:58 -- bdevperf/test_config.sh@19 -- # create_job job1 00:14:52.447 08:00:58 -- bdevperf/common.sh@8 -- # local job_section=job1 00:14:52.447 08:00:58 -- bdevperf/common.sh@9 -- # local rw= 00:14:52.447 08:00:58 -- bdevperf/common.sh@10 -- # local filename= 00:14:52.447 08:00:58 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:14:52.447 08:00:58 -- bdevperf/common.sh@18 -- # job='[job1]' 00:14:52.447 08:00:58 -- bdevperf/common.sh@19 -- # echo 00:14:52.447 08:00:58 -- bdevperf/common.sh@20 -- # cat 00:14:52.447 00:14:52.447 08:00:58 -- bdevperf/test_config.sh@20 -- # create_job job2 00:14:52.447 08:00:58 -- bdevperf/common.sh@8 -- # local job_section=job2 00:14:52.447 08:00:58 -- bdevperf/common.sh@9 -- # local rw= 00:14:52.447 08:00:58 -- bdevperf/common.sh@10 -- # local filename= 00:14:52.447 08:00:58 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:14:52.447 08:00:58 -- bdevperf/common.sh@18 -- # job='[job2]' 00:14:52.447 08:00:58 -- bdevperf/common.sh@19 -- # echo 00:14:52.448 08:00:58 -- bdevperf/common.sh@20 -- # cat 00:14:52.448 00:14:52.448 08:00:58 -- bdevperf/test_config.sh@21 -- # create_job job3 00:14:52.448 08:00:58 -- bdevperf/common.sh@8 -- # local job_section=job3 00:14:52.448 08:00:58 -- bdevperf/common.sh@9 -- # local rw= 00:14:52.448 08:00:58 -- bdevperf/common.sh@10 -- # local filename= 00:14:52.448 08:00:58 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:14:52.448 08:00:58 -- bdevperf/common.sh@18 -- # job='[job3]' 00:14:52.448 08:00:58 -- bdevperf/common.sh@19 -- # echo 00:14:52.448 08:00:58 -- bdevperf/common.sh@20 -- # cat 00:14:52.448 08:00:58 -- bdevperf/test_config.sh@22 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:14:54.980 08:01:00 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-13 08:00:58.150246] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:14:54.980 [2024-07-13 08:00:58.150499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67161 ] 00:14:54.980 Using job config with 4 jobs 00:14:54.980 [2024-07-13 08:00:58.295086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.980 [2024-07-13 08:00:58.351814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.980 cpumask for '\''job0'\'' is too big 00:14:54.980 cpumask for '\''job1'\'' is too big 00:14:54.980 cpumask for '\''job2'\'' is too big 00:14:54.980 cpumask for '\''job3'\'' is too big 00:14:54.980 Running I/O for 2 seconds... 00:14:54.980 00:14:54.980 Latency(us) 00:14:54.980 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.980 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:14:54.980 Malloc0 : 2.00 106289.48 103.80 0.00 0.00 2407.30 534.43 4025.78 00:14:54.980 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:14:54.980 Malloc0 : 2.00 106272.93 103.78 0.00 0.00 2406.24 487.62 3526.46 00:14:54.980 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:14:54.980 Malloc0 : 2.00 106257.24 103.77 0.00 0.00 2405.46 487.62 3042.74 00:14:54.980 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:14:54.980 Malloc0 : 2.01 106334.77 103.84 0.00 0.00 2402.19 235.03 2839.89 00:14:54.980 =================================================================================================================== 00:14:54.980 Total : 425154.42 415.19 0.00 0.00 2405.30 235.03 4025.78' 00:14:54.980 08:01:00 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-13 08:00:58.150246] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:14:54.980 [2024-07-13 08:00:58.150499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67161 ] 00:14:54.980 Using job config with 4 jobs 00:14:54.980 [2024-07-13 08:00:58.295086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.980 [2024-07-13 08:00:58.351814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.980 cpumask for '\''job0'\'' is too big 00:14:54.980 cpumask for '\''job1'\'' is too big 00:14:54.980 cpumask for '\''job2'\'' is too big 00:14:54.980 cpumask for '\''job3'\'' is too big 00:14:54.980 Running I/O for 2 seconds... 
00:14:54.980 00:14:54.980 Latency(us) 00:14:54.980 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.980 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:14:54.980 Malloc0 : 2.00 106289.48 103.80 0.00 0.00 2407.30 534.43 4025.78 00:14:54.980 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:14:54.980 Malloc0 : 2.00 106272.93 103.78 0.00 0.00 2406.24 487.62 3526.46 00:14:54.980 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:14:54.980 Malloc0 : 2.00 106257.24 103.77 0.00 0.00 2405.46 487.62 3042.74 00:14:54.980 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:14:54.980 Malloc0 : 2.01 106334.77 103.84 0.00 0.00 2402.19 235.03 2839.89 00:14:54.980 =================================================================================================================== 00:14:54.980 Total : 425154.42 415.19 0.00 0.00 2405.30 235.03 4025.78' 00:14:54.980 08:01:00 -- bdevperf/common.sh@32 -- # echo '[2024-07-13 08:00:58.150246] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:14:54.980 [2024-07-13 08:00:58.150499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67161 ] 00:14:54.980 Using job config with 4 jobs 00:14:54.980 [2024-07-13 08:00:58.295086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.980 [2024-07-13 08:00:58.351814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.980 cpumask for '\''job0'\'' is too big 00:14:54.980 cpumask for '\''job1'\'' is too big 00:14:54.980 cpumask for '\''job2'\'' is too big 00:14:54.980 cpumask for '\''job3'\'' is too big 00:14:54.980 Running I/O for 2 seconds... 00:14:54.980 00:14:54.980 Latency(us) 00:14:54.980 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.980 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:14:54.980 Malloc0 : 2.00 106289.48 103.80 0.00 0.00 2407.30 534.43 4025.78 00:14:54.980 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:14:54.980 Malloc0 : 2.00 106272.93 103.78 0.00 0.00 2406.24 487.62 3526.46 00:14:54.980 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:14:54.980 Malloc0 : 2.00 106257.24 103.77 0.00 0.00 2405.46 487.62 3042.74 00:14:54.980 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:14:54.981 Malloc0 : 2.01 106334.77 103.84 0.00 0.00 2402.19 235.03 2839.89 00:14:54.981 =================================================================================================================== 00:14:54.981 Total : 425154.42 415.19 0.00 0.00 2405.30 235.03 4025.78' 00:14:54.981 08:01:00 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:14:54.981 08:01:00 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:14:54.981 08:01:00 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:14:54.981 08:01:00 -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:14:55.239 [2024-07-13 08:01:00.862823] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
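The common.sh@32 traces above show how get_num_jobs recovers the job count: the captured bdevperf output is echoed through two successive greps. A self-contained sketch of that pipeline (the function wrapper and argument handling are assumed from the call site; the grep patterns are verbatim from the trace):

    get_num_jobs() {
        # $1 is the full bdevperf output captured by the caller
        echo "$1" | grep -oE 'Using job config with [0-9]+ jobs' \
                  | grep -oE '[0-9]+'
    }

    # usage matching the traced check at test_config.sh@23:
    [[ $(get_num_jobs "$bdevperf_output") == "4" ]]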
00:14:55.239 [2024-07-13 08:01:00.863007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67200 ] 00:14:55.239 [2024-07-13 08:01:00.993649] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.239 [2024-07-13 08:01:01.047134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.498 cpumask for 'job0' is too big 00:14:55.498 cpumask for 'job1' is too big 00:14:55.498 cpumask for 'job2' is too big 00:14:55.498 cpumask for 'job3' is too big 00:14:58.035 08:01:03 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:14:58.035 Running I/O for 2 seconds... 00:14:58.035 00:14:58.035 Latency(us) 00:14:58.035 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:58.035 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:14:58.035 Malloc0 : 2.00 105666.01 103.19 0.00 0.00 2421.62 526.63 4150.61 00:14:58.035 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:14:58.035 Malloc0 : 2.00 105649.40 103.17 0.00 0.00 2420.52 487.62 3620.08 00:14:58.035 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:14:58.035 Malloc0 : 2.01 105694.66 103.22 0.00 0.00 2418.22 514.93 3058.35 00:14:58.035 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:14:58.035 Malloc0 : 2.01 105679.17 103.20 0.00 0.00 2417.17 511.02 2995.93 00:14:58.035 =================================================================================================================== 00:14:58.035 Total : 422689.24 412.78 0.00 0.00 2419.38 487.62 4150.61' 00:14:58.035 08:01:03 -- bdevperf/test_config.sh@27 -- # cleanup 00:14:58.035 08:01:03 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:14:58.035 00:14:58.035 08:01:03 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:14:58.035 08:01:03 -- bdevperf/common.sh@8 -- # local job_section=job0 00:14:58.035 08:01:03 -- bdevperf/common.sh@9 -- # local rw=write 00:14:58.035 08:01:03 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:14:58.035 08:01:03 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:14:58.035 08:01:03 -- bdevperf/common.sh@18 -- # job='[job0]' 00:14:58.035 08:01:03 -- bdevperf/common.sh@19 -- # echo 00:14:58.035 08:01:03 -- bdevperf/common.sh@20 -- # cat 00:14:58.035 00:14:58.035 08:01:03 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:14:58.035 08:01:03 -- bdevperf/common.sh@8 -- # local job_section=job1 00:14:58.035 08:01:03 -- bdevperf/common.sh@9 -- # local rw=write 00:14:58.035 08:01:03 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:14:58.035 08:01:03 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:14:58.035 08:01:03 -- bdevperf/common.sh@18 -- # job='[job1]' 00:14:58.035 08:01:03 -- bdevperf/common.sh@19 -- # echo 00:14:58.035 08:01:03 -- bdevperf/common.sh@20 -- # cat 00:14:58.035 00:14:58.035 08:01:03 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:14:58.035 08:01:03 -- bdevperf/common.sh@8 -- # local job_section=job2 00:14:58.035 08:01:03 -- bdevperf/common.sh@9 -- # local rw=write 00:14:58.035 08:01:03 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:14:58.035 08:01:03 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:14:58.035 08:01:03 -- 
bdevperf/common.sh@18 -- # job='[job2]' 00:14:58.035 08:01:03 -- bdevperf/common.sh@19 -- # echo 00:14:58.035 08:01:03 -- bdevperf/common.sh@20 -- # cat 00:14:58.035 08:01:03 -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:15:00.566 08:01:06 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-13 08:01:03.554725] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:00.566 [2024-07-13 08:01:03.554921] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67256 ] 00:15:00.566 Using job config with 3 jobs 00:15:00.566 [2024-07-13 08:01:03.683941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.566 [2024-07-13 08:01:03.732757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.566 cpumask for '\''job0'\'' is too big 00:15:00.566 cpumask for '\''job1'\'' is too big 00:15:00.566 cpumask for '\''job2'\'' is too big 00:15:00.566 Running I/O for 2 seconds... 00:15:00.566 00:15:00.566 Latency(us) 00:15:00.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.566 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:15:00.566 Malloc0 : 2.00 137153.84 133.94 0.00 0.00 1865.44 526.63 2933.52 00:15:00.566 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:15:00.566 Malloc0 : 2.00 137131.72 133.92 0.00 0.00 1864.61 485.67 2808.69 00:15:00.566 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:15:00.566 Malloc0 : 2.00 137190.76 133.98 0.00 0.00 1862.73 245.76 2808.69 00:15:00.566 =================================================================================================================== 00:15:00.566 Total : 411476.32 401.83 0.00 0.00 1864.26 245.76 2933.52' 00:15:00.566 08:01:06 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-13 08:01:03.554725] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:00.566 [2024-07-13 08:01:03.554921] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67256 ] 00:15:00.566 Using job config with 3 jobs 00:15:00.566 [2024-07-13 08:01:03.683941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.566 [2024-07-13 08:01:03.732757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.566 cpumask for '\''job0'\'' is too big 00:15:00.566 cpumask for '\''job1'\'' is too big 00:15:00.566 cpumask for '\''job2'\'' is too big 00:15:00.566 Running I/O for 2 seconds... 
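The cleanup traced at test_config.sh@27 and common.sh@36 above simply removes the generated test.conf between scenarios, and each scenario rebuilds it with fresh create_job calls. A sketch with the rm taken verbatim from the trace, wrapped in the function name used at the call sites:

    cleanup() {
        # path as traced: .../spdk/test/bdev/bdevperf/test.conf
        rm -f "$testconf"
    }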
00:15:00.566 00:15:00.566 Latency(us) 00:15:00.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.566 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:15:00.566 Malloc0 : 2.00 137153.84 133.94 0.00 0.00 1865.44 526.63 2933.52 00:15:00.566 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:15:00.566 Malloc0 : 2.00 137131.72 133.92 0.00 0.00 1864.61 485.67 2808.69 00:15:00.566 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:15:00.566 Malloc0 : 2.00 137190.76 133.98 0.00 0.00 1862.73 245.76 2808.69 00:15:00.566 =================================================================================================================== 00:15:00.566 Total : 411476.32 401.83 0.00 0.00 1864.26 245.76 2933.52' 00:15:00.566 08:01:06 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:15:00.566 08:01:06 -- bdevperf/common.sh@32 -- # echo '[2024-07-13 08:01:03.554725] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:00.566 [2024-07-13 08:01:03.554921] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67256 ] 00:15:00.566 Using job config with 3 jobs 00:15:00.566 [2024-07-13 08:01:03.683941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.566 [2024-07-13 08:01:03.732757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.566 cpumask for '\''job0'\'' is too big 00:15:00.566 cpumask for '\''job1'\'' is too big 00:15:00.566 cpumask for '\''job2'\'' is too big 00:15:00.566 Running I/O for 2 seconds... 
00:15:00.566 00:15:00.566 Latency(us) 00:15:00.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.566 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:15:00.566 Malloc0 : 2.00 137153.84 133.94 0.00 0.00 1865.44 526.63 2933.52 00:15:00.566 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:15:00.566 Malloc0 : 2.00 137131.72 133.92 0.00 0.00 1864.61 485.67 2808.69 00:15:00.566 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:15:00.566 Malloc0 : 2.00 137190.76 133.98 0.00 0.00 1862.73 245.76 2808.69 00:15:00.566 =================================================================================================================== 00:15:00.566 Total : 411476.32 401.83 0.00 0.00 1864.26 245.76 2933.52' 00:15:00.566 08:01:06 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:15:00.566 08:01:06 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:15:00.566 08:01:06 -- bdevperf/test_config.sh@35 -- # cleanup 00:15:00.566 08:01:06 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:15:00.566 08:01:06 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:15:00.566 08:01:06 -- bdevperf/common.sh@8 -- # local job_section=global 00:15:00.566 08:01:06 -- bdevperf/common.sh@9 -- # local rw=rw 00:15:00.566 08:01:06 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:15:00.566 08:01:06 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:15:00.566 08:01:06 -- bdevperf/common.sh@13 -- # cat 00:15:00.566 00:15:00.566 08:01:06 -- bdevperf/common.sh@18 -- # job='[global]' 00:15:00.566 08:01:06 -- bdevperf/common.sh@19 -- # echo 00:15:00.566 08:01:06 -- bdevperf/common.sh@20 -- # cat 00:15:00.566 00:15:00.566 08:01:06 -- bdevperf/test_config.sh@38 -- # create_job job0 00:15:00.566 08:01:06 -- bdevperf/common.sh@8 -- # local job_section=job0 00:15:00.566 08:01:06 -- bdevperf/common.sh@9 -- # local rw= 00:15:00.566 08:01:06 -- bdevperf/common.sh@10 -- # local filename= 00:15:00.566 08:01:06 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:15:00.566 08:01:06 -- bdevperf/common.sh@18 -- # job='[job0]' 00:15:00.566 08:01:06 -- bdevperf/common.sh@19 -- # echo 00:15:00.566 08:01:06 -- bdevperf/common.sh@20 -- # cat 00:15:00.566 00:15:00.566 08:01:06 -- bdevperf/test_config.sh@39 -- # create_job job1 00:15:00.566 08:01:06 -- bdevperf/common.sh@8 -- # local job_section=job1 00:15:00.566 08:01:06 -- bdevperf/common.sh@9 -- # local rw= 00:15:00.566 08:01:06 -- bdevperf/common.sh@10 -- # local filename= 00:15:00.566 08:01:06 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:15:00.566 08:01:06 -- bdevperf/common.sh@18 -- # job='[job1]' 00:15:00.566 08:01:06 -- bdevperf/common.sh@19 -- # echo 00:15:00.566 08:01:06 -- bdevperf/common.sh@20 -- # cat 00:15:00.566 00:15:00.566 08:01:06 -- bdevperf/test_config.sh@40 -- # create_job job2 00:15:00.566 08:01:06 -- bdevperf/common.sh@8 -- # local job_section=job2 00:15:00.566 08:01:06 -- bdevperf/common.sh@9 -- # local rw= 00:15:00.566 08:01:06 -- bdevperf/common.sh@10 -- # local filename= 00:15:00.566 08:01:06 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:15:00.566 08:01:06 -- bdevperf/common.sh@18 -- # job='[job2]' 00:15:00.566 08:01:06 -- bdevperf/common.sh@19 -- # echo 00:15:00.566 08:01:06 -- bdevperf/common.sh@20 -- # cat 00:15:00.566 00:15:00.566 08:01:06 -- bdevperf/test_config.sh@41 -- # create_job job3 00:15:00.566 08:01:06 -- 
bdevperf/common.sh@8 -- # local job_section=job3 00:15:00.566 08:01:06 -- bdevperf/common.sh@9 -- # local rw= 00:15:00.566 08:01:06 -- bdevperf/common.sh@10 -- # local filename= 00:15:00.566 08:01:06 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:15:00.566 08:01:06 -- bdevperf/common.sh@18 -- # job='[job3]' 00:15:00.566 08:01:06 -- bdevperf/common.sh@19 -- # echo 00:15:00.566 08:01:06 -- bdevperf/common.sh@20 -- # cat 00:15:00.566 08:01:06 -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:15:03.098 08:01:08 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-13 08:01:06.239244] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:03.098 [2024-07-13 08:01:06.239438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67302 ] 00:15:03.098 Using job config with 4 jobs 00:15:03.098 [2024-07-13 08:01:06.372463] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.098 [2024-07-13 08:01:06.420874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.098 cpumask for '\''job0'\'' is too big 00:15:03.098 cpumask for '\''job1'\'' is too big 00:15:03.098 cpumask for '\''job2'\'' is too big 00:15:03.098 cpumask for '\''job3'\'' is too big 00:15:03.098 Running I/O for 2 seconds... 00:15:03.098 00:15:03.098 Latency(us) 00:15:03.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.098 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:03.098 Malloc0 : 2.01 51668.31 50.46 0.00 0.00 4952.66 1092.27 8363.64 00:15:03.098 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:03.098 Malloc1 : 2.01 51658.94 50.45 0.00 0.00 4952.68 1248.30 8363.64 00:15:03.098 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:03.098 Malloc0 : 2.01 51651.32 50.44 0.00 0.00 4948.76 1022.05 7271.38 00:15:03.098 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:03.098 Malloc1 : 2.01 51689.60 50.48 0.00 0.00 4944.11 1162.48 7240.17 00:15:03.098 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:03.098 Malloc0 : 2.01 51681.99 50.47 0.00 0.00 4940.01 1022.05 6210.32 00:15:03.098 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:03.098 Malloc1 : 2.01 51673.41 50.46 0.00 0.00 4939.29 1178.09 6179.11 00:15:03.098 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:03.098 Malloc0 : 2.01 51665.91 50.45 0.00 0.00 4935.65 1022.05 5617.37 00:15:03.098 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:03.098 Malloc1 : 2.01 51657.36 50.45 0.00 0.00 4935.05 1201.49 5586.16 00:15:03.098 =================================================================================================================== 00:15:03.098 Total : 413346.83 403.66 0.00 0.00 4943.52 1022.05 8363.64' 00:15:03.098 08:01:08 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-13 08:01:06.239244] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
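The create_job calls traced at test_config.sh@37-@41 (a global section with rw=rw and filename=Malloc0:Malloc1, followed by empty job0..job3 sections) imply a generated test.conf roughly like the one below; the keys are inferred from the traced locals, not read from the file itself:

    [global]
    filename=Malloc0:Malloc1
    rw=rw

    [job0]

    [job1]

    [job2]

    [job3]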
00:15:03.098 [2024-07-13 08:01:06.239438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67302 ] 00:15:03.098 Using job config with 4 jobs 00:15:03.098 [2024-07-13 08:01:06.372463] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.098 [2024-07-13 08:01:06.420874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.098 cpumask for '\''job0'\'' is too big 00:15:03.098 cpumask for '\''job1'\'' is too big 00:15:03.098 cpumask for '\''job2'\'' is too big 00:15:03.098 cpumask for '\''job3'\'' is too big 00:15:03.098 Running I/O for 2 seconds... 00:15:03.098 00:15:03.098 Latency(us) 00:15:03.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.098 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:03.098 Malloc0 : 2.01 51668.31 50.46 0.00 0.00 4952.66 1092.27 8363.64 00:15:03.098 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:03.098 Malloc1 : 2.01 51658.94 50.45 0.00 0.00 4952.68 1248.30 8363.64 00:15:03.098 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:03.098 Malloc0 : 2.01 51651.32 50.44 0.00 0.00 4948.76 1022.05 7271.38 00:15:03.098 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:03.099 Malloc1 : 2.01 51689.60 50.48 0.00 0.00 4944.11 1162.48 7240.17 00:15:03.099 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:03.099 Malloc0 : 2.01 51681.99 50.47 0.00 0.00 4940.01 1022.05 6210.32 00:15:03.099 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:03.099 Malloc1 : 2.01 51673.41 50.46 0.00 0.00 4939.29 1178.09 6179.11 00:15:03.099 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:03.099 Malloc0 : 2.01 51665.91 50.45 0.00 0.00 4935.65 1022.05 5617.37 00:15:03.099 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:03.099 Malloc1 : 2.01 51657.36 50.45 0.00 0.00 4935.05 1201.49 5586.16 00:15:03.099 =================================================================================================================== 00:15:03.099 Total : 413346.83 403.66 0.00 0.00 4943.52 1022.05 8363.64' 00:15:03.099 08:01:08 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:15:03.099 08:01:08 -- bdevperf/common.sh@32 -- # echo '[2024-07-13 08:01:06.239244] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:15:03.099 [2024-07-13 08:01:06.239438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67302 ] 00:15:03.099 Using job config with 4 jobs 00:15:03.099 [2024-07-13 08:01:06.372463] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.099 [2024-07-13 08:01:06.420874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.099 cpumask for '\''job0'\'' is too big 00:15:03.099 cpumask for '\''job1'\'' is too big 00:15:03.099 cpumask for '\''job2'\'' is too big 00:15:03.099 cpumask for '\''job3'\'' is too big 00:15:03.099 Running I/O for 2 seconds... 00:15:03.099 00:15:03.099 Latency(us) 00:15:03.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.099 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:03.099 Malloc0 : 2.01 51668.31 50.46 0.00 0.00 4952.66 1092.27 8363.64 00:15:03.099 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:03.099 Malloc1 : 2.01 51658.94 50.45 0.00 0.00 4952.68 1248.30 8363.64 00:15:03.099 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:03.099 Malloc0 : 2.01 51651.32 50.44 0.00 0.00 4948.76 1022.05 7271.38 00:15:03.099 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:03.099 Malloc1 : 2.01 51689.60 50.48 0.00 0.00 4944.11 1162.48 7240.17 00:15:03.099 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:03.099 Malloc0 : 2.01 51681.99 50.47 0.00 0.00 4940.01 1022.05 6210.32 00:15:03.099 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:03.099 Malloc1 : 2.01 51673.41 50.46 0.00 0.00 4939.29 1178.09 6179.11 00:15:03.099 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:03.099 Malloc0 : 2.01 51665.91 50.45 0.00 0.00 4935.65 1022.05 5617.37 00:15:03.099 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:03.099 Malloc1 : 2.01 51657.36 50.45 0.00 0.00 4935.05 1201.49 5586.16 00:15:03.099 =================================================================================================================== 00:15:03.099 Total : 413346.83 403.66 0.00 0.00 4943.52 1022.05 8363.64' 00:15:03.099 08:01:08 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:15:03.099 08:01:08 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:15:03.099 08:01:08 -- bdevperf/test_config.sh@44 -- # cleanup 00:15:03.099 08:01:08 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:15:03.099 08:01:08 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:15:03.099 ************************************ 00:15:03.099 END TEST bdevperf_config 00:15:03.099 ************************************ 00:15:03.099 00:15:03.099 real 0m10.898s 00:15:03.099 user 0m9.171s 00:15:03.099 sys 0m0.917s 00:15:03.099 08:01:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:03.099 08:01:08 -- common/autotest_common.sh@10 -- # set +x 00:15:03.099 08:01:08 -- spdk/autotest.sh@198 -- # uname -s 00:15:03.099 08:01:08 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:15:03.099 08:01:08 -- spdk/autotest.sh@199 -- # run_test reactor_set_interrupt 
/home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:15:03.099 08:01:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:03.099 08:01:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:03.099 08:01:08 -- common/autotest_common.sh@10 -- # set +x 00:15:03.099 ************************************ 00:15:03.099 START TEST reactor_set_interrupt 00:15:03.099 ************************************ 00:15:03.099 08:01:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:15:03.358 * Looking for test storage... 00:15:03.358 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:15:03.358 08:01:08 -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:15:03.358 08:01:08 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:15:03.358 08:01:08 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:15:03.358 08:01:08 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:15:03.358 08:01:08 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:15:03.358 08:01:08 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:03.358 08:01:08 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:15:03.358 08:01:08 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:15:03.358 08:01:08 -- common/autotest_common.sh@34 -- # set -e 00:15:03.358 08:01:08 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:15:03.358 08:01:08 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:15:03.358 08:01:08 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:15:03.358 08:01:08 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:15:03.358 08:01:08 -- common/build_config.sh@1 -- # CONFIG_RDMA=y 00:15:03.358 08:01:08 -- common/build_config.sh@2 -- # CONFIG_UNIT_TESTS=y 00:15:03.358 08:01:08 -- common/build_config.sh@3 -- # CONFIG_GOLANG=n 00:15:03.358 08:01:08 -- common/build_config.sh@4 -- # CONFIG_FUSE=n 00:15:03.358 08:01:08 -- common/build_config.sh@5 -- # CONFIG_ISAL=n 00:15:03.358 08:01:08 -- common/build_config.sh@6 -- # CONFIG_VTUNE_DIR= 00:15:03.358 08:01:08 -- common/build_config.sh@7 -- # CONFIG_CUSTOMOCF=n 00:15:03.358 08:01:08 -- common/build_config.sh@8 -- # CONFIG_IPSEC_MB_DIR= 00:15:03.358 08:01:08 -- common/build_config.sh@9 -- # CONFIG_VBDEV_COMPRESS=n 00:15:03.358 08:01:08 -- common/build_config.sh@10 -- # CONFIG_OCF_PATH= 00:15:03.358 08:01:08 -- common/build_config.sh@11 -- # CONFIG_SHARED=n 00:15:03.358 08:01:08 -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:15:03.358 08:01:08 -- common/build_config.sh@13 -- # CONFIG_TESTS=y 00:15:03.358 08:01:08 -- common/build_config.sh@14 -- # CONFIG_APPS=y 00:15:03.358 08:01:08 -- common/build_config.sh@15 -- # CONFIG_ISAL_CRYPTO=n 00:15:03.358 08:01:08 -- common/build_config.sh@16 -- # CONFIG_LIBDIR= 00:15:03.358 08:01:08 -- common/build_config.sh@17 -- # CONFIG_DPDK_COMPRESSDEV=n 00:15:03.358 08:01:08 -- common/build_config.sh@18 -- # CONFIG_DAOS_DIR= 00:15:03.358 08:01:08 -- common/build_config.sh@19 -- # CONFIG_ISCSI_INITIATOR=n 00:15:03.358 08:01:08 -- 
common/build_config.sh@20 -- # CONFIG_DPDK_PKG_CONFIG=n 00:15:03.358 08:01:08 -- common/build_config.sh@21 -- # CONFIG_ASAN=y 00:15:03.358 08:01:08 -- common/build_config.sh@22 -- # CONFIG_LTO=n 00:15:03.358 08:01:08 -- common/build_config.sh@23 -- # CONFIG_CET=n 00:15:03.358 08:01:08 -- common/build_config.sh@24 -- # CONFIG_FUZZER=n 00:15:03.358 08:01:08 -- common/build_config.sh@25 -- # CONFIG_USDT=n 00:15:03.358 08:01:08 -- common/build_config.sh@26 -- # CONFIG_VTUNE=n 00:15:03.358 08:01:08 -- common/build_config.sh@27 -- # CONFIG_VHOST=y 00:15:03.358 08:01:08 -- common/build_config.sh@28 -- # CONFIG_WPDK_DIR= 00:15:03.358 08:01:08 -- common/build_config.sh@29 -- # CONFIG_UBLK=n 00:15:03.358 08:01:08 -- common/build_config.sh@30 -- # CONFIG_URING=n 00:15:03.358 08:01:08 -- common/build_config.sh@31 -- # CONFIG_SMA=n 00:15:03.358 08:01:08 -- common/build_config.sh@32 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:15:03.358 08:01:08 -- common/build_config.sh@33 -- # CONFIG_IDXD_KERNEL=n 00:15:03.358 08:01:08 -- common/build_config.sh@34 -- # CONFIG_FC_PATH= 00:15:03.358 08:01:08 -- common/build_config.sh@35 -- # CONFIG_PREFIX=/usr/local 00:15:03.358 08:01:08 -- common/build_config.sh@36 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=n 00:15:03.358 08:01:08 -- common/build_config.sh@37 -- # CONFIG_XNVME=n 00:15:03.358 08:01:08 -- common/build_config.sh@38 -- # CONFIG_RDMA_PROV=verbs 00:15:03.358 08:01:08 -- common/build_config.sh@39 -- # CONFIG_RDMA_SET_TOS=y 00:15:03.358 08:01:08 -- common/build_config.sh@40 -- # CONFIG_FUZZER_LIB= 00:15:03.358 08:01:08 -- common/build_config.sh@41 -- # CONFIG_HAVE_LIBARCHIVE=n 00:15:03.358 08:01:08 -- common/build_config.sh@42 -- # CONFIG_ARCH=native 00:15:03.358 08:01:08 -- common/build_config.sh@43 -- # CONFIG_PGO_CAPTURE=n 00:15:03.358 08:01:08 -- common/build_config.sh@44 -- # CONFIG_DAOS=y 00:15:03.358 08:01:08 -- common/build_config.sh@45 -- # CONFIG_WERROR=y 00:15:03.358 08:01:08 -- common/build_config.sh@46 -- # CONFIG_DEBUG=y 00:15:03.358 08:01:08 -- common/build_config.sh@47 -- # CONFIG_AVAHI=n 00:15:03.358 08:01:08 -- common/build_config.sh@48 -- # CONFIG_CROSS_PREFIX= 00:15:03.358 08:01:08 -- common/build_config.sh@49 -- # CONFIG_PGO_USE=n 00:15:03.358 08:01:08 -- common/build_config.sh@50 -- # CONFIG_CRYPTO=n 00:15:03.358 08:01:08 -- common/build_config.sh@51 -- # CONFIG_HAVE_ARC4RANDOM=n 00:15:03.358 08:01:08 -- common/build_config.sh@52 -- # CONFIG_OPENSSL_PATH= 00:15:03.358 08:01:08 -- common/build_config.sh@53 -- # CONFIG_EXAMPLES=y 00:15:03.358 08:01:08 -- common/build_config.sh@54 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:15:03.358 08:01:08 -- common/build_config.sh@55 -- # CONFIG_MAX_LCORES= 00:15:03.358 08:01:08 -- common/build_config.sh@56 -- # CONFIG_VIRTIO=y 00:15:03.358 08:01:08 -- common/build_config.sh@57 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:15:03.358 08:01:08 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB=n 00:15:03.358 08:01:08 -- common/build_config.sh@59 -- # CONFIG_UBSAN=n 00:15:03.358 08:01:08 -- common/build_config.sh@60 -- # CONFIG_HAVE_EXECINFO_H=y 00:15:03.358 08:01:08 -- common/build_config.sh@61 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:15:03.358 08:01:08 -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:15:03.358 08:01:08 -- common/build_config.sh@63 -- # CONFIG_URING_PATH= 00:15:03.358 08:01:08 -- common/build_config.sh@64 -- # CONFIG_NVME_CUSE=y 00:15:03.358 08:01:08 -- common/build_config.sh@65 -- # CONFIG_URING_ZNS=n 00:15:03.358 08:01:08 -- 
common/build_config.sh@66 -- # CONFIG_VFIO_USER=n 00:15:03.358 08:01:08 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:15:03.358 08:01:08 -- common/build_config.sh@68 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=n 00:15:03.358 08:01:08 -- common/build_config.sh@69 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:15:03.358 08:01:08 -- common/build_config.sh@70 -- # CONFIG_RBD=n 00:15:03.358 08:01:08 -- common/build_config.sh@71 -- # CONFIG_RAID5F=n 00:15:03.358 08:01:08 -- common/build_config.sh@72 -- # CONFIG_VFIO_USER_DIR= 00:15:03.358 08:01:08 -- common/build_config.sh@73 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:15:03.358 08:01:08 -- common/build_config.sh@74 -- # CONFIG_TSAN=n 00:15:03.358 08:01:08 -- common/build_config.sh@75 -- # CONFIG_IDXD=y 00:15:03.358 08:01:08 -- common/build_config.sh@76 -- # CONFIG_OCF=n 00:15:03.358 08:01:08 -- common/build_config.sh@77 -- # CONFIG_CRYPTO_MLX5=n 00:15:03.358 08:01:08 -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:15:03.358 08:01:08 -- common/build_config.sh@79 -- # CONFIG_COVERAGE=y 00:15:03.358 08:01:08 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:15:03.358 08:01:08 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:15:03.358 08:01:08 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:15:03.358 08:01:08 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:15:03.358 08:01:08 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:15:03.358 08:01:08 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:15:03.358 08:01:08 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:15:03.358 08:01:08 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:15:03.358 08:01:08 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:15:03.358 08:01:08 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:15:03.358 08:01:08 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:15:03.358 08:01:08 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:15:03.358 08:01:08 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:15:03.358 08:01:08 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:15:03.358 08:01:08 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:15:03.358 08:01:08 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:15:03.358 #define SPDK_CONFIG_H 00:15:03.358 #define SPDK_CONFIG_APPS 1 00:15:03.358 #define SPDK_CONFIG_ARCH native 00:15:03.358 #define SPDK_CONFIG_ASAN 1 00:15:03.358 #undef SPDK_CONFIG_AVAHI 00:15:03.358 #undef SPDK_CONFIG_CET 00:15:03.358 #define SPDK_CONFIG_COVERAGE 1 00:15:03.358 #define SPDK_CONFIG_CROSS_PREFIX 00:15:03.358 #undef SPDK_CONFIG_CRYPTO 00:15:03.358 #undef SPDK_CONFIG_CRYPTO_MLX5 00:15:03.358 #undef SPDK_CONFIG_CUSTOMOCF 00:15:03.358 #define SPDK_CONFIG_DAOS 1 00:15:03.358 #define SPDK_CONFIG_DAOS_DIR 00:15:03.358 #define SPDK_CONFIG_DEBUG 1 00:15:03.358 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:15:03.358 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:15:03.358 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:15:03.358 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:15:03.358 #undef 
SPDK_CONFIG_DPDK_PKG_CONFIG 00:15:03.358 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:15:03.358 #define SPDK_CONFIG_EXAMPLES 1 00:15:03.358 #undef SPDK_CONFIG_FC 00:15:03.358 #define SPDK_CONFIG_FC_PATH 00:15:03.358 #define SPDK_CONFIG_FIO_PLUGIN 1 00:15:03.358 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:15:03.358 #undef SPDK_CONFIG_FUSE 00:15:03.358 #undef SPDK_CONFIG_FUZZER 00:15:03.358 #define SPDK_CONFIG_FUZZER_LIB 00:15:03.358 #undef SPDK_CONFIG_GOLANG 00:15:03.358 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:15:03.358 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:15:03.358 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:15:03.358 #undef SPDK_CONFIG_HAVE_LIBBSD 00:15:03.358 #undef SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 00:15:03.358 #define SPDK_CONFIG_IDXD 1 00:15:03.358 #undef SPDK_CONFIG_IDXD_KERNEL 00:15:03.358 #undef SPDK_CONFIG_IPSEC_MB 00:15:03.358 #define SPDK_CONFIG_IPSEC_MB_DIR 00:15:03.358 #undef SPDK_CONFIG_ISAL 00:15:03.358 #undef SPDK_CONFIG_ISAL_CRYPTO 00:15:03.358 #undef SPDK_CONFIG_ISCSI_INITIATOR 00:15:03.358 #define SPDK_CONFIG_LIBDIR 00:15:03.358 #undef SPDK_CONFIG_LTO 00:15:03.358 #define SPDK_CONFIG_MAX_LCORES 00:15:03.358 #define SPDK_CONFIG_NVME_CUSE 1 00:15:03.358 #undef SPDK_CONFIG_OCF 00:15:03.358 #define SPDK_CONFIG_OCF_PATH 00:15:03.358 #define SPDK_CONFIG_OPENSSL_PATH 00:15:03.358 #undef SPDK_CONFIG_PGO_CAPTURE 00:15:03.358 #undef SPDK_CONFIG_PGO_USE 00:15:03.358 #define SPDK_CONFIG_PREFIX /usr/local 00:15:03.358 #undef SPDK_CONFIG_RAID5F 00:15:03.358 #undef SPDK_CONFIG_RBD 00:15:03.358 #define SPDK_CONFIG_RDMA 1 00:15:03.358 #define SPDK_CONFIG_RDMA_PROV verbs 00:15:03.358 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:15:03.358 #undef SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 00:15:03.358 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:15:03.359 #undef SPDK_CONFIG_SHARED 00:15:03.359 #undef SPDK_CONFIG_SMA 00:15:03.359 #define SPDK_CONFIG_TESTS 1 00:15:03.359 #undef SPDK_CONFIG_TSAN 00:15:03.359 #undef SPDK_CONFIG_UBLK 00:15:03.359 #undef SPDK_CONFIG_UBSAN 00:15:03.359 #define SPDK_CONFIG_UNIT_TESTS 1 00:15:03.359 #undef SPDK_CONFIG_URING 00:15:03.359 #define SPDK_CONFIG_URING_PATH 00:15:03.359 #undef SPDK_CONFIG_URING_ZNS 00:15:03.359 #undef SPDK_CONFIG_USDT 00:15:03.359 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:15:03.359 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:15:03.359 #undef SPDK_CONFIG_VFIO_USER 00:15:03.359 #define SPDK_CONFIG_VFIO_USER_DIR 00:15:03.359 #define SPDK_CONFIG_VHOST 1 00:15:03.359 #define SPDK_CONFIG_VIRTIO 1 00:15:03.359 #undef SPDK_CONFIG_VTUNE 00:15:03.359 #define SPDK_CONFIG_VTUNE_DIR 00:15:03.359 #define SPDK_CONFIG_WERROR 1 00:15:03.359 #define SPDK_CONFIG_WPDK_DIR 00:15:03.359 #undef SPDK_CONFIG_XNVME 00:15:03.359 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:15:03.359 08:01:08 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:15:03.359 08:01:08 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:03.359 08:01:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:03.359 08:01:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:03.359 08:01:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:03.359 08:01:08 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:15:03.359 08:01:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:15:03.359 08:01:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:15:03.359 08:01:08 -- paths/export.sh@5 -- # export PATH 00:15:03.359 08:01:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:15:03.359 08:01:08 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:15:03.359 08:01:08 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:15:03.359 08:01:08 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:15:03.359 08:01:08 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:15:03.359 08:01:08 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:15:03.359 08:01:08 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:15:03.359 08:01:08 -- pm/common@16 -- # TEST_TAG=N/A 00:15:03.359 08:01:08 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:15:03.359 08:01:08 -- common/autotest_common.sh@52 -- # : 1 00:15:03.359 08:01:08 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:15:03.359 08:01:08 -- common/autotest_common.sh@56 -- # : 0 00:15:03.359 08:01:08 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:15:03.359 08:01:08 -- common/autotest_common.sh@58 -- # : 0 00:15:03.359 08:01:08 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:15:03.359 08:01:08 -- common/autotest_common.sh@60 -- # : 1 00:15:03.359 08:01:08 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:15:03.359 08:01:08 -- common/autotest_common.sh@62 -- # : 1 00:15:03.359 08:01:08 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:15:03.359 08:01:08 -- common/autotest_common.sh@64 -- # : 00:15:03.359 08:01:08 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:15:03.359 08:01:08 -- common/autotest_common.sh@66 -- # : 0 00:15:03.359 08:01:08 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:15:03.359 08:01:08 -- common/autotest_common.sh@68 -- # : 0 00:15:03.359 08:01:08 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:15:03.359 08:01:08 -- 
common/autotest_common.sh@70 -- # : 0 00:15:03.359 08:01:08 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:15:03.359 08:01:08 -- common/autotest_common.sh@72 -- # : 0 00:15:03.359 08:01:08 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:15:03.359 08:01:08 -- common/autotest_common.sh@74 -- # : 0 00:15:03.359 08:01:08 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:15:03.359 08:01:08 -- common/autotest_common.sh@76 -- # : 0 00:15:03.359 08:01:08 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:15:03.359 08:01:08 -- common/autotest_common.sh@78 -- # : 0 00:15:03.359 08:01:08 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:15:03.359 08:01:08 -- common/autotest_common.sh@80 -- # : 0 00:15:03.359 08:01:08 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:15:03.359 08:01:08 -- common/autotest_common.sh@82 -- # : 0 00:15:03.359 08:01:08 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:15:03.359 08:01:08 -- common/autotest_common.sh@84 -- # : 0 00:15:03.359 08:01:08 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:15:03.359 08:01:08 -- common/autotest_common.sh@86 -- # : 0 00:15:03.359 08:01:08 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:15:03.359 08:01:08 -- common/autotest_common.sh@88 -- # : 0 00:15:03.359 08:01:08 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:15:03.359 08:01:08 -- common/autotest_common.sh@90 -- # : 0 00:15:03.359 08:01:08 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:15:03.359 08:01:08 -- common/autotest_common.sh@92 -- # : 0 00:15:03.359 08:01:08 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:15:03.359 08:01:08 -- common/autotest_common.sh@94 -- # : 0 00:15:03.359 08:01:08 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:15:03.359 08:01:08 -- common/autotest_common.sh@96 -- # : rdma 00:15:03.359 08:01:08 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:15:03.359 08:01:08 -- common/autotest_common.sh@98 -- # : 0 00:15:03.359 08:01:08 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:15:03.359 08:01:08 -- common/autotest_common.sh@100 -- # : 0 00:15:03.359 08:01:08 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:15:03.359 08:01:08 -- common/autotest_common.sh@102 -- # : 1 00:15:03.359 08:01:08 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:15:03.359 08:01:08 -- common/autotest_common.sh@104 -- # : 0 00:15:03.359 08:01:08 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:15:03.359 08:01:08 -- common/autotest_common.sh@106 -- # : 0 00:15:03.359 08:01:08 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:15:03.359 08:01:08 -- common/autotest_common.sh@108 -- # : 0 00:15:03.359 08:01:08 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:15:03.359 08:01:08 -- common/autotest_common.sh@110 -- # : 0 00:15:03.359 08:01:08 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:15:03.359 08:01:08 -- common/autotest_common.sh@112 -- # : 0 00:15:03.359 08:01:08 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:15:03.359 08:01:08 -- common/autotest_common.sh@114 -- # : 1 00:15:03.359 08:01:08 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:15:03.359 08:01:08 -- common/autotest_common.sh@116 -- # : 0 00:15:03.359 08:01:08 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:15:03.359 
08:01:08 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:15:03.359 08:01:08 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:15:03.359 08:01:08 -- common/autotest_common.sh@120 -- # : 0 00:15:03.359 08:01:08 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:15:03.359 08:01:08 -- common/autotest_common.sh@122 -- # : 0 00:15:03.359 08:01:08 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:15:03.359 08:01:08 -- common/autotest_common.sh@124 -- # : 0 00:15:03.359 08:01:08 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:15:03.359 08:01:08 -- common/autotest_common.sh@126 -- # : 0 00:15:03.359 08:01:08 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:15:03.359 08:01:08 -- common/autotest_common.sh@128 -- # : 0 00:15:03.359 08:01:09 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:15:03.359 08:01:09 -- common/autotest_common.sh@130 -- # : 0 00:15:03.359 08:01:09 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:15:03.359 08:01:09 -- common/autotest_common.sh@132 -- # : v22.11.4 00:15:03.359 08:01:09 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:15:03.359 08:01:09 -- common/autotest_common.sh@134 -- # : true 00:15:03.359 08:01:09 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:15:03.359 08:01:09 -- common/autotest_common.sh@136 -- # : 0 00:15:03.359 08:01:09 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:15:03.359 08:01:09 -- common/autotest_common.sh@138 -- # : 0 00:15:03.359 08:01:09 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:15:03.359 08:01:09 -- common/autotest_common.sh@140 -- # : 0 00:15:03.359 08:01:09 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:15:03.359 08:01:09 -- common/autotest_common.sh@142 -- # : 0 00:15:03.359 08:01:09 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:15:03.359 08:01:09 -- common/autotest_common.sh@144 -- # : 0 00:15:03.359 08:01:09 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:15:03.359 08:01:09 -- common/autotest_common.sh@146 -- # : 0 00:15:03.359 08:01:09 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:15:03.359 08:01:09 -- common/autotest_common.sh@148 -- # : 00:15:03.359 08:01:09 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:15:03.359 08:01:09 -- common/autotest_common.sh@150 -- # : 0 00:15:03.359 08:01:09 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:15:03.359 08:01:09 -- common/autotest_common.sh@152 -- # : 1 00:15:03.359 08:01:09 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:15:03.359 08:01:09 -- common/autotest_common.sh@154 -- # : 0 00:15:03.359 08:01:09 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:15:03.359 08:01:09 -- common/autotest_common.sh@156 -- # : 0 00:15:03.359 08:01:09 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:15:03.359 08:01:09 -- common/autotest_common.sh@158 -- # : 0 00:15:03.359 08:01:09 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:15:03.359 08:01:09 -- common/autotest_common.sh@160 -- # : 0 00:15:03.359 08:01:09 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:15:03.359 08:01:09 -- common/autotest_common.sh@163 -- # : 00:15:03.359 08:01:09 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:15:03.359 08:01:09 -- common/autotest_common.sh@165 -- # : 0 00:15:03.359 08:01:09 -- 
common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:15:03.359 08:01:09 -- common/autotest_common.sh@167 -- # : 0 00:15:03.359 08:01:09 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:15:03.359 08:01:09 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:15:03.359 08:01:09 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:15:03.359 08:01:09 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:15:03.359 08:01:09 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:15:03.359 08:01:09 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:03.359 08:01:09 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:03.359 08:01:09 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:03.359 08:01:09 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:03.359 08:01:09 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:15:03.359 08:01:09 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:15:03.359 08:01:09 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:15:03.359 08:01:09 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:15:03.359 08:01:09 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:15:03.359 08:01:09 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:15:03.359 08:01:09 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:03.359 08:01:09 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:03.359 08:01:09 -- common/autotest_common.sh@190 -- # export 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:03.359 08:01:09 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:03.359 08:01:09 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:15:03.359 08:01:09 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:15:03.359 08:01:09 -- common/autotest_common.sh@196 -- # cat 00:15:03.359 08:01:09 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:15:03.359 08:01:09 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:03.359 08:01:09 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:03.359 08:01:09 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:03.359 08:01:09 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:03.359 08:01:09 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:15:03.359 08:01:09 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:15:03.359 08:01:09 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:15:03.359 08:01:09 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:15:03.359 08:01:09 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:15:03.359 08:01:09 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:15:03.359 08:01:09 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:15:03.359 08:01:09 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:15:03.359 08:01:09 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:15:03.359 08:01:09 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:15:03.359 08:01:09 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:15:03.359 08:01:09 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:15:03.359 08:01:09 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:03.359 08:01:09 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:03.359 08:01:09 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:15:03.359 08:01:09 -- common/autotest_common.sh@249 -- # export valgrind= 00:15:03.359 08:01:09 -- common/autotest_common.sh@249 -- # valgrind= 00:15:03.359 08:01:09 -- common/autotest_common.sh@255 -- # uname -s 00:15:03.360 08:01:09 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:15:03.360 08:01:09 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:15:03.360 08:01:09 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:15:03.360 08:01:09 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:15:03.360 08:01:09 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:15:03.360 08:01:09 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:15:03.360 08:01:09 -- common/autotest_common.sh@265 -- # MAKE=make 00:15:03.360 08:01:09 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:15:03.360 08:01:09 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:15:03.360 08:01:09 -- 
common/autotest_common.sh@282 -- # HUGEMEM=4096 00:15:03.360 08:01:09 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:15:03.360 08:01:09 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:15:03.360 08:01:09 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:15:03.360 08:01:09 -- common/autotest_common.sh@309 -- # [[ -z 67382 ]] 00:15:03.360 08:01:09 -- common/autotest_common.sh@309 -- # kill -0 67382 00:15:03.360 08:01:09 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:15:03.360 08:01:09 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:15:03.360 08:01:09 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:15:03.360 08:01:09 -- common/autotest_common.sh@322 -- # local mount target_dir 00:15:03.360 08:01:09 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:15:03.360 08:01:09 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:15:03.360 08:01:09 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:15:03.360 08:01:09 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:15:03.360 08:01:09 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.AMYdJd 00:15:03.360 08:01:09 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:15:03.360 08:01:09 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:15:03.360 08:01:09 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:15:03.360 08:01:09 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.AMYdJd/tests/interrupt /tmp/spdk.AMYdJd 00:15:03.360 08:01:09 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:15:03.360 08:01:09 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:15:03.360 08:01:09 -- common/autotest_common.sh@318 -- # df -T 00:15:03.360 08:01:09 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:15:03.360 08:01:09 -- common/autotest_common.sh@352 -- # mounts["$mount"]=devtmpfs 00:15:03.360 08:01:09 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:15:03.360 08:01:09 -- common/autotest_common.sh@353 -- # avails["$mount"]=6267637760 00:15:03.360 08:01:09 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6267637760 00:15:03.360 08:01:09 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:15:03.360 08:01:09 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:15:03.360 08:01:09 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:15:03.360 08:01:09 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:15:03.360 08:01:09 -- common/autotest_common.sh@353 -- # avails["$mount"]=6296928256 00:15:03.360 08:01:09 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6298185728 00:15:03.360 08:01:09 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:15:03.360 08:01:09 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:15:03.360 08:01:09 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:15:03.360 08:01:09 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:15:03.360 08:01:09 -- common/autotest_common.sh@353 -- # avails["$mount"]=6277181440 00:15:03.360 08:01:09 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6298185728 00:15:03.360 08:01:09 -- common/autotest_common.sh@354 -- # uses["$mount"]=21004288 00:15:03.360 08:01:09 -- 
common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:15:03.360 08:01:09 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:15:03.360 08:01:09 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:15:03.360 08:01:09 -- common/autotest_common.sh@353 -- # avails["$mount"]=6298185728 00:15:03.360 08:01:09 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6298185728 00:15:03.360 08:01:09 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:15:03.360 08:01:09 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:15:03.360 08:01:09 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:15:03.360 08:01:09 -- common/autotest_common.sh@352 -- # fss["$mount"]=xfs 00:15:03.360 08:01:09 -- common/autotest_common.sh@353 -- # avails["$mount"]=12940226560 00:15:03.360 08:01:09 -- common/autotest_common.sh@353 -- # sizes["$mount"]=21463302144 00:15:03.360 08:01:09 -- common/autotest_common.sh@354 -- # uses["$mount"]=8523075584 00:15:03.360 08:01:09 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:15:03.360 08:01:09 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:15:03.360 08:01:09 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:15:03.360 08:01:09 -- common/autotest_common.sh@353 -- # avails["$mount"]=1259638784 00:15:03.360 08:01:09 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1259638784 00:15:03.360 08:01:09 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:15:03.360 08:01:09 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:15:03.360 08:01:09 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/centos7-vg-autotest/centos7-libvirt/output 00:15:03.360 08:01:09 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:15:03.360 08:01:09 -- common/autotest_common.sh@353 -- # avails["$mount"]=96008314880 00:15:03.360 08:01:09 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:15:03.360 08:01:09 -- common/autotest_common.sh@354 -- # uses["$mount"]=3694465024 00:15:03.360 08:01:09 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:15:03.360 08:01:09 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:15:03.360 08:01:09 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:15:03.360 08:01:09 -- common/autotest_common.sh@353 -- # avails["$mount"]=1259638784 00:15:03.360 08:01:09 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1259638784 00:15:03.360 08:01:09 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:15:03.360 08:01:09 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:15:03.360 08:01:09 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:15:03.360 * Looking for test storage... 
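The df -T parsing traced above feeds set_test_storage, whose search continues below: it walks the candidate directories and keeps the first one whose filesystem can hold the requested 2 GiB plus a 64 MiB cushion (2214592512 bytes in the trace). A minimal sketch of that selection logic, not the verbatim autotest_common.sh code; the candidate list and the awk filter are taken directly from the trace:

    requested_size=$((2147483648 + 64 * 1024 * 1024))   # 2 GiB + overhead, as in the trace
    declare -A fss avails
    while read -r source fs size use avail _ mount; do
        fss["$mount"]=$fs
        avails["$mount"]=$((avail * 1024))               # df reports 1K blocks (assumed)
    done < <(df -T | grep -v Filesystem)
    for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        target_space=${avails[$mount]:-0}
        if ((target_space >= requested_size)); then
            printf '* Found test storage at %s\n' "$target_dir"
            break
        fi
    done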
00:15:03.360 08:01:09 -- common/autotest_common.sh@359 -- # local target_space new_size 00:15:03.360 08:01:09 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:15:03.360 08:01:09 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:15:03.360 08:01:09 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:15:03.360 08:01:09 -- common/autotest_common.sh@363 -- # mount=/ 00:15:03.360 08:01:09 -- common/autotest_common.sh@365 -- # target_space=12940226560 00:15:03.360 08:01:09 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:15:03.360 08:01:09 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:15:03.360 08:01:09 -- common/autotest_common.sh@371 -- # [[ xfs == tmpfs ]] 00:15:03.360 08:01:09 -- common/autotest_common.sh@371 -- # [[ xfs == ramfs ]] 00:15:03.360 08:01:09 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:15:03.360 08:01:09 -- common/autotest_common.sh@372 -- # new_size=10737668096 00:15:03.360 08:01:09 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:15:03.360 08:01:09 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:15:03.360 08:01:09 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:15:03.360 08:01:09 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:15:03.360 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:15:03.360 08:01:09 -- common/autotest_common.sh@380 -- # return 0 00:15:03.360 08:01:09 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:15:03.360 08:01:09 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:15:03.360 08:01:09 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:15:03.360 08:01:09 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:15:03.360 08:01:09 -- common/autotest_common.sh@1672 -- # true 00:15:03.360 08:01:09 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:15:03.360 08:01:09 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:15:03.360 08:01:09 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:15:03.360 08:01:09 -- common/autotest_common.sh@27 -- # exec 00:15:03.360 08:01:09 -- common/autotest_common.sh@29 -- # exec 00:15:03.360 08:01:09 -- common/autotest_common.sh@31 -- # xtrace_restore 00:15:03.360 08:01:09 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:15:03.360 08:01:09 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:15:03.360 08:01:09 -- common/autotest_common.sh@18 -- # set -x 00:15:03.360 08:01:09 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:03.360 08:01:09 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:15:03.360 08:01:09 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:15:03.360 08:01:09 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:15:03.360 08:01:09 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:15:03.360 08:01:09 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:15:03.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
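The wait announced here (waitforlisten, traced in full just below) is essentially a bounded poll: keep checking that the target pid is still alive and that its UNIX-domain RPC socket answers, up to max_retries attempts. A hedged sketch of that pattern; the exact probe the real helper issues may differ:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        while ((max_retries-- > 0)); do
            kill -0 "$pid" || return 1          # target died before it started listening
            if [ -S "$rpc_addr" ] &&
               scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
                return 0                        # socket exists and answers RPCs
            fi
            sleep 0.1
        done
        return 1                                # retries exhausted
    }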
00:15:03.360 08:01:09 -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:15:03.360 08:01:09 -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:15:03.360 08:01:09 -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:15:03.360 08:01:09 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.360 08:01:09 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:15:03.360 08:01:09 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=67423 00:15:03.360 08:01:09 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:03.360 08:01:09 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 67423 /var/tmp/spdk.sock 00:15:03.360 08:01:09 -- common/autotest_common.sh@819 -- # '[' -z 67423 ']' 00:15:03.360 08:01:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.360 08:01:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:03.360 08:01:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.360 08:01:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:03.360 08:01:09 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:15:03.360 08:01:09 -- common/autotest_common.sh@10 -- # set +x 00:15:03.619 [2024-07-13 08:01:09.181008] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:03.619 [2024-07-13 08:01:09.181163] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67423 ] 00:15:03.619 [2024-07-13 08:01:09.310648] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:03.619 [2024-07-13 08:01:09.354436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:03.619 [2024-07-13 08:01:09.354569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.619 [2024-07-13 08:01:09.354570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:03.619 [2024-07-13 08:01:09.420794] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
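The -m 0x07 mask handed to interrupt_tgt is why exactly three reactors come up on cores 0-2, and why the script's per-reactor masks are 0x1, 0x2 and 0x4: each reactor owns one bit of the server mask. The arithmetic, spelled out as a small illustrative loop:

    cpu_server_mask=0x07          # 0b111 -> cores 0, 1, 2
    for core in 0 1 2; do
        if ((cpu_server_mask & (1 << core))); then
            printf 'reactor_%d runs on core %d (mask 0x%x)\n' "$core" "$core" $((1 << core))
        fi
    done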
00:15:04.185 08:01:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:04.185 08:01:09 -- common/autotest_common.sh@852 -- # return 0 00:15:04.185 08:01:09 -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:15:04.185 08:01:09 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:04.444 Malloc0 00:15:04.444 Malloc1 00:15:04.444 Malloc2 00:15:04.444 08:01:10 -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:15:04.444 08:01:10 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:15:04.444 08:01:10 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:15:04.444 08:01:10 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:15:04.444 5000+0 records in 00:15:04.444 5000+0 records out 00:15:04.444 10240000 bytes (10 MB) copied, 0.0274958 s, 372 MB/s 00:15:04.444 08:01:10 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:15:04.702 AIO0 00:15:04.702 08:01:10 -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 67423 00:15:04.702 08:01:10 -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 67423 without_thd 00:15:04.702 08:01:10 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=67423 00:15:04.702 08:01:10 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:15:04.702 08:01:10 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:15:04.702 08:01:10 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:15:04.702 08:01:10 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:15:04.702 08:01:10 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:15:04.702 08:01:10 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:15:04.702 08:01:10 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:15:04.702 08:01:10 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:15:04.702 08:01:10 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:15:04.960 08:01:10 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:15:04.960 08:01:10 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:15:04.960 08:01:10 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:15:04.960 08:01:10 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:15:04.960 08:01:10 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:15:04.960 08:01:10 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:15:04.960 08:01:10 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:15:04.960 08:01:10 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:15:04.960 08:01:10 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:15:04.960 08:01:10 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:15:04.960 spdk_thread ids are 1 on reactor0. 
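setup_bdev_aio, traced above, reduces to two commands: dd a 10,240,000-byte backing file (5000 blocks of 2048 bytes, hence the "10 MB copied" report) and register it as an AIO bdev over RPC. Condensed, with the paths as used in this run:

    aiofile=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile
    dd if=/dev/zero of="$aiofile" bs=2048 count=5000             # 10 MB backing file
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        bdev_aio_create "$aiofile" AIO0 2048                     # bdev name AIO0, block size 2048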
00:15:04.960 08:01:10 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:15:04.960 08:01:10 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:15:04.960 08:01:10 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:15:04.960 08:01:10 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 67423 0 00:15:04.960 08:01:10 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 67423 0 idle 00:15:04.960 08:01:10 -- interrupt/interrupt_common.sh@33 -- # local pid=67423 00:15:04.960 08:01:10 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:15:04.960 08:01:10 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:15:04.960 08:01:10 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:15:04.960 08:01:10 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:15:04.960 08:01:10 -- interrupt/interrupt_common.sh@41 -- # hash top 00:15:04.960 08:01:10 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:15:04.960 08:01:10 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:15:04.960 08:01:10 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 67423 -w 256 00:15:04.960 08:01:10 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:15:05.220 08:01:10 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 67423 root 20 0 20.1t 36796 9972 S 0.0 0.3 0:00.20 reactor_0' 00:15:05.220 08:01:10 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:15:05.220 08:01:10 -- interrupt/interrupt_common.sh@48 -- # echo 67423 root 20 0 20.1t 36796 9972 S 0.0 0.3 0:00.20 reactor_0 00:15:05.220 08:01:10 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:15:05.220 08:01:10 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:15:05.220 08:01:10 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:15:05.220 08:01:10 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:15:05.220 08:01:10 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:15:05.220 08:01:10 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:15:05.220 08:01:10 -- interrupt/interrupt_common.sh@56 -- # return 0 00:15:05.220 08:01:10 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:15:05.220 08:01:10 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 67423 1 00:15:05.220 08:01:10 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 67423 1 idle 00:15:05.220 08:01:10 -- interrupt/interrupt_common.sh@33 -- # local pid=67423 00:15:05.220 08:01:10 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:15:05.220 08:01:10 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:15:05.220 08:01:10 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:15:05.220 08:01:10 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:15:05.220 08:01:10 -- interrupt/interrupt_common.sh@41 -- # hash top 00:15:05.220 08:01:10 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:15:05.220 08:01:10 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:15:05.220 08:01:10 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 67423 -w 256 00:15:05.220 08:01:10 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:15:05.508 08:01:11 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 67436 root 20 0 20.1t 36796 9972 S 0.0 0.3 0:00.00 reactor_1' 00:15:05.508 08:01:11 -- interrupt/interrupt_common.sh@48 -- # echo 67436 root 20 0 20.1t 36796 9972 S 0.0 0.3 0:00.00 reactor_1 00:15:05.508 08:01:11 -- 
interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:15:05.508 08:01:11 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:15:05.508 08:01:11 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:15:05.508 08:01:11 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:15:05.508 08:01:11 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:15:05.508 08:01:11 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:15:05.508 08:01:11 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:15:05.508 08:01:11 -- interrupt/interrupt_common.sh@56 -- # return 0 00:15:05.508 08:01:11 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:15:05.508 08:01:11 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 67423 2 00:15:05.508 08:01:11 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 67423 2 idle 00:15:05.508 08:01:11 -- interrupt/interrupt_common.sh@33 -- # local pid=67423 00:15:05.508 08:01:11 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:15:05.508 08:01:11 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:15:05.508 08:01:11 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:15:05.508 08:01:11 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:15:05.508 08:01:11 -- interrupt/interrupt_common.sh@41 -- # hash top 00:15:05.508 08:01:11 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:15:05.508 08:01:11 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:15:05.508 08:01:11 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 67423 -w 256 00:15:05.508 08:01:11 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:15:05.508 08:01:11 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 67437 root 20 0 20.1t 36796 9972 S 0.0 0.3 0:00.00 reactor_2' 00:15:05.508 08:01:11 -- interrupt/interrupt_common.sh@48 -- # echo 67437 root 20 0 20.1t 36796 9972 S 0.0 0.3 0:00.00 reactor_2 00:15:05.508 08:01:11 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:15:05.508 08:01:11 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:15:05.508 08:01:11 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:15:05.508 08:01:11 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:15:05.508 08:01:11 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:15:05.508 08:01:11 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:15:05.508 08:01:11 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:15:05.508 08:01:11 -- interrupt/interrupt_common.sh@56 -- # return 0 00:15:05.508 08:01:11 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:15:05.508 08:01:11 -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:15:05.508 08:01:11 -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:15:05.775 [2024-07-13 08:01:11.505423] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:05.776 08:01:11 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:15:06.033 [2024-07-13 08:01:11.728981] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 
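The reactor_set_interrupt_mode RPC whose first invocation starts here is the heart of the test: with -d it drops a reactor out of interrupt mode into polling, without -d it switches it back, and each transition is bracketed by the "RPC Start ..." / "complete reactor switch" notice pair visible in the log. The round trip the script performs, in the order the trace shows (disable 0 then 2, re-enable 2 then 0):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in 0 2; do
        "$rpc" --plugin interrupt_plugin reactor_set_interrupt_mode "$i" -d   # to poll mode
    done
    # ... confirm both reactors report as busy under top ...
    for i in 2 0; do
        "$rpc" --plugin interrupt_plugin reactor_set_interrupt_mode "$i"      # back to intr mode
    done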
00:15:06.033 [2024-07-13 08:01:11.729402] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:15:06.033 08:01:11 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:15:06.290 [2024-07-13 08:01:11.908912] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:15:06.290 [2024-07-13 08:01:11.909203] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:15:06.290 08:01:11 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:15:06.290 08:01:11 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 67423 0 00:15:06.290 08:01:11 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 67423 0 busy 00:15:06.290 08:01:11 -- interrupt/interrupt_common.sh@33 -- # local pid=67423 00:15:06.290 08:01:11 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:15:06.290 08:01:11 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:15:06.290 08:01:11 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:15:06.290 08:01:11 -- interrupt/interrupt_common.sh@41 -- # hash top 00:15:06.290 08:01:11 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:15:06.290 08:01:11 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:15:06.290 08:01:11 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 67423 -w 256 00:15:06.290 08:01:11 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:15:06.291 08:01:12 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 67423 root 20 0 20.1t 36908 9972 R 93.8 0.3 0:00.57 reactor_0' 00:15:06.291 08:01:12 -- interrupt/interrupt_common.sh@48 -- # echo 67423 root 20 0 20.1t 36908 9972 R 93.8 0.3 0:00.57 reactor_0 00:15:06.291 08:01:12 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:15:06.291 08:01:12 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:15:06.291 08:01:12 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=93.8 00:15:06.291 08:01:12 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=93 00:15:06.291 08:01:12 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:15:06.291 08:01:12 -- interrupt/interrupt_common.sh@51 -- # [[ 93 -lt 70 ]] 00:15:06.291 08:01:12 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:15:06.291 08:01:12 -- interrupt/interrupt_common.sh@56 -- # return 0 00:15:06.291 08:01:12 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:15:06.291 08:01:12 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 67423 2 00:15:06.291 08:01:12 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 67423 2 busy 00:15:06.291 08:01:12 -- interrupt/interrupt_common.sh@33 -- # local pid=67423 00:15:06.291 08:01:12 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:15:06.291 08:01:12 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:15:06.291 08:01:12 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:15:06.291 08:01:12 -- interrupt/interrupt_common.sh@41 -- # hash top 00:15:06.291 08:01:12 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:15:06.291 08:01:12 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:15:06.291 08:01:12 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 67423 -w 256 00:15:06.291 08:01:12 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:15:06.548 08:01:12 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 67437 root 20 0 
20.1t 36908 9972 R 93.8 0.3 0:00.34 reactor_2' 00:15:06.548 08:01:12 -- interrupt/interrupt_common.sh@48 -- # echo 67437 root 20 0 20.1t 36908 9972 R 93.8 0.3 0:00.34 reactor_2 00:15:06.548 08:01:12 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:15:06.548 08:01:12 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:15:06.548 08:01:12 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=93.8 00:15:06.548 08:01:12 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=93 00:15:06.548 08:01:12 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:15:06.548 08:01:12 -- interrupt/interrupt_common.sh@51 -- # [[ 93 -lt 70 ]] 00:15:06.548 08:01:12 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:15:06.548 08:01:12 -- interrupt/interrupt_common.sh@56 -- # return 0 00:15:06.548 08:01:12 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:15:06.806 [2024-07-13 08:01:12.472972] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:15:06.806 [2024-07-13 08:01:12.473533] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:15:06.806 08:01:12 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:15:06.806 08:01:12 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 67423 2 00:15:06.806 08:01:12 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 67423 2 idle 00:15:06.806 08:01:12 -- interrupt/interrupt_common.sh@33 -- # local pid=67423 00:15:06.806 08:01:12 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:15:06.806 08:01:12 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:15:06.806 08:01:12 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:15:06.806 08:01:12 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:15:06.806 08:01:12 -- interrupt/interrupt_common.sh@41 -- # hash top 00:15:06.806 08:01:12 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:15:06.806 08:01:12 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:15:06.806 08:01:12 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:15:06.806 08:01:12 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 67423 -w 256 00:15:07.063 08:01:12 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 67437 root 20 0 20.1t 36960 9972 S 0.0 0.3 0:00.56 reactor_2' 00:15:07.063 08:01:12 -- interrupt/interrupt_common.sh@48 -- # echo 67437 root 20 0 20.1t 36960 9972 S 0.0 0.3 0:00.56 reactor_2 00:15:07.063 08:01:12 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:15:07.063 08:01:12 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:15:07.063 08:01:12 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:15:07.063 08:01:12 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:15:07.063 08:01:12 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:15:07.064 08:01:12 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:15:07.064 08:01:12 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:15:07.064 08:01:12 -- interrupt/interrupt_common.sh@56 -- # return 0 00:15:07.064 08:01:12 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:15:07.064 [2024-07-13 08:01:12.793075] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode 
on reactor 0.
00:15:07.064 [2024-07-13 08:01:12.793687] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:15:07.064 08:01:12 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']'
00:15:07.064 08:01:12 -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}"
00:15:07.064 08:01:12 -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1
00:15:07.321 [2024-07-13 08:01:12.953284] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:15:07.321 08:01:12 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 67423 0
00:15:07.321 08:01:12 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 67423 0 idle
00:15:07.321 08:01:12 -- interrupt/interrupt_common.sh@33 -- # local pid=67423
00:15:07.321 08:01:12 -- interrupt/interrupt_common.sh@34 -- # local idx=0
00:15:07.321 08:01:12 -- interrupt/interrupt_common.sh@35 -- # local state=idle
00:15:07.321 08:01:12 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]]
00:15:07.321 08:01:12 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]]
00:15:07.321 08:01:12 -- interrupt/interrupt_common.sh@41 -- # hash top
00:15:07.321 08:01:12 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 ))
00:15:07.321 08:01:12 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 ))
00:15:07.321 08:01:12 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0
00:15:07.321 08:01:12 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 67423 -w 256
00:15:07.321 08:01:13 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 67423 root 20 0 20.1t 37056 9972 S 0.0 0.3 0:01.28 reactor_0'
00:15:07.321 08:01:13 -- interrupt/interrupt_common.sh@48 -- # echo 67423 root 20 0 20.1t 37056 9972 S 0.0 0.3 0:01.28 reactor_0
00:15:07.321 08:01:13 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g'
00:15:07.321 08:01:13 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}'
00:15:07.579 08:01:13 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0
00:15:07.579 08:01:13 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0
00:15:07.579 08:01:13 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]]
00:15:07.579 08:01:13 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]]
00:15:07.579 08:01:13 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]]
00:15:07.579 08:01:13 -- interrupt/interrupt_common.sh@56 -- # return 0
00:15:07.579 08:01:13 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0
00:15:07.579 08:01:13 -- interrupt/reactor_set_interrupt.sh@77 -- # return 0
00:15:07.579 08:01:13 -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT
00:15:07.579 08:01:13 -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 67423
00:15:07.579 08:01:13 -- common/autotest_common.sh@926 -- # '[' -z 67423 ']'
00:15:07.579 08:01:13 -- common/autotest_common.sh@930 -- # kill -0 67423
00:15:07.579 08:01:13 -- common/autotest_common.sh@931 -- # uname
00:15:07.579 08:01:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:15:07.579 08:01:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67423
00:15:07.579 08:01:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:15:07.579 killing process with pid 67423
00:15:07.579 08:01:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:15:07.579 08:01:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67423'
00:15:07.579 08:01:13 -- common/autotest_common.sh@945 -- # kill 67423
00:15:07.579 08:01:13 -- common/autotest_common.sh@950 -- # wait 67423
00:15:07.579 08:01:13 -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup
00:15:07.579 08:01:13 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile
00:15:07.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:07.837 08:01:13 -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt
00:15:07.837 08:01:13 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:07.837 08:01:13 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07
00:15:07.837 08:01:13 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=67568
00:15:07.837 08:01:13 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT
00:15:07.837 08:01:13 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 67568 /var/tmp/spdk.sock
00:15:07.837 08:01:13 -- common/autotest_common.sh@819 -- # '[' -z 67568 ']'
00:15:07.837 08:01:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:07.837 08:01:13 -- common/autotest_common.sh@824 -- # local max_retries=100
00:15:07.837 08:01:13 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g
00:15:07.837 08:01:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:07.837 08:01:13 -- common/autotest_common.sh@828 -- # xtrace_disable
00:15:07.837 08:01:13 -- common/autotest_common.sh@10 -- # set +x
00:15:07.837 [2024-07-13 08:01:13.516371] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:15:08.096 [2024-07-13 08:01:13.516573] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67568 ]
00:15:08.096 [2024-07-13 08:01:13.650501] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:15:08.096 [2024-07-13 08:01:13.695033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:15:08.096 [2024-07-13 08:01:13.695167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:08.096 [2024-07-13 08:01:13.695166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:15:08.096 [2024-07-13 08:01:13.761272] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
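killprocess, whose trace closes the first half of the test above, is a guarded kill: verify the pid is set and alive, look up its process name (the real helper special-cases a sudo wrapper), announce, kill, and wait so the exit status is reaped before cleanup runs. A condensed sketch of what the trace shows, with the sudo branch simplified:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                          # must still be running
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [[ $process_name == sudo ]] && return 1         # simplified: refuse to kill sudo
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"       # works because the target is a child of this shell
    }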
00:15:08.663 08:01:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:08.663 08:01:14 -- common/autotest_common.sh@852 -- # return 0 00:15:08.663 08:01:14 -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:15:08.663 08:01:14 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:08.663 Malloc0 00:15:08.663 Malloc1 00:15:08.663 Malloc2 00:15:08.921 08:01:14 -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:15:08.921 08:01:14 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:15:08.921 08:01:14 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:15:08.921 08:01:14 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:15:08.921 5000+0 records in 00:15:08.921 5000+0 records out 00:15:08.921 10240000 bytes (10 MB) copied, 0.0275769 s, 371 MB/s 00:15:08.921 08:01:14 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:15:09.180 AIO0 00:15:09.180 08:01:14 -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 67568 00:15:09.180 08:01:14 -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 67568 00:15:09.180 08:01:14 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=67568 00:15:09.180 08:01:14 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:15:09.180 08:01:14 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:15:09.180 08:01:14 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:15:09.180 08:01:14 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:15:09.180 08:01:14 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:15:09.180 08:01:14 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:15:09.180 08:01:14 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:15:09.180 08:01:14 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:15:09.180 08:01:14 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:15:09.180 08:01:14 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:15:09.180 08:01:14 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:15:09.180 08:01:14 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:15:09.180 08:01:14 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:15:09.180 08:01:14 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:15:09.180 08:01:14 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:15:09.180 08:01:14 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:15:09.180 08:01:14 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:15:09.180 08:01:14 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:15:09.440 08:01:15 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:15:09.440 spdk_thread ids are 1 on reactor0. 
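reactor_get_thread_ids, traced above for masks 0x1 and 0x4, resolves which SPDK thread ids live on a reactor by filtering thread_get_stats output through jq; the leading 0x is stripped first so the string compare against the JSON cpumask field matches. Sketch, with every operation taken from the trace (single-digit masks only, as used here):

    reactor_get_thread_ids() {
        local reactor_cpumask=$1
        reactor_cpumask=$((reactor_cpumask))     # 0x1 -> 1, 0x4 -> 4
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats \
            | jq --arg reactor_cpumask "$reactor_cpumask" \
                 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
    }
    thd0_ids=($(reactor_get_thread_ids 0x1))     # "1" here: only app_thread sits on reactor 0
    thd2_ids=($(reactor_get_thread_ids 0x4))     # empty here, matching the bare echo '' above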
00:15:09.440 08:01:15 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:15:09.440 08:01:15 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:15:09.440 08:01:15 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:15:09.440 08:01:15 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 67568 0 00:15:09.440 08:01:15 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 67568 0 idle 00:15:09.440 08:01:15 -- interrupt/interrupt_common.sh@33 -- # local pid=67568 00:15:09.440 08:01:15 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:15:09.440 08:01:15 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:15:09.440 08:01:15 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:15:09.440 08:01:15 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:15:09.440 08:01:15 -- interrupt/interrupt_common.sh@41 -- # hash top 00:15:09.440 08:01:15 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:15:09.440 08:01:15 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:15:09.440 08:01:15 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 67568 -w 256 00:15:09.440 08:01:15 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:15:09.699 08:01:15 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 67568 root 20 0 20.1t 37436 9968 S 0.0 0.3 0:00.21 reactor_0' 00:15:09.699 08:01:15 -- interrupt/interrupt_common.sh@48 -- # echo 67568 root 20 0 20.1t 37436 9968 S 0.0 0.3 0:00.21 reactor_0 00:15:09.699 08:01:15 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:15:09.699 08:01:15 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:15:09.699 08:01:15 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:15:09.699 08:01:15 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:15:09.699 08:01:15 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:15:09.699 08:01:15 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:15:09.699 08:01:15 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:15:09.699 08:01:15 -- interrupt/interrupt_common.sh@56 -- # return 0 00:15:09.699 08:01:15 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:15:09.699 08:01:15 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 67568 1 00:15:09.699 08:01:15 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 67568 1 idle 00:15:09.699 08:01:15 -- interrupt/interrupt_common.sh@33 -- # local pid=67568 00:15:09.699 08:01:15 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:15:09.699 08:01:15 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:15:09.699 08:01:15 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:15:09.699 08:01:15 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:15:09.699 08:01:15 -- interrupt/interrupt_common.sh@41 -- # hash top 00:15:09.699 08:01:15 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:15:09.699 08:01:15 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:15:09.699 08:01:15 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 67568 -w 256 00:15:09.699 08:01:15 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:15:09.958 08:01:15 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 67572 root 20 0 20.1t 37436 9968 S 0.0 0.3 0:00.00 reactor_1' 00:15:09.958 08:01:15 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:15:09.958 08:01:15 -- interrupt/interrupt_common.sh@48 -- # echo 67572 root 20 0 20.1t 
37436 9968 S 0.0 0.3 0:00.00 reactor_1 00:15:09.958 08:01:15 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:15:09.958 08:01:15 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:15:09.958 08:01:15 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:15:09.958 08:01:15 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:15:09.958 08:01:15 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:15:09.958 08:01:15 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:15:09.958 08:01:15 -- interrupt/interrupt_common.sh@56 -- # return 0 00:15:09.958 08:01:15 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:15:09.958 08:01:15 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 67568 2 00:15:09.958 08:01:15 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 67568 2 idle 00:15:09.958 08:01:15 -- interrupt/interrupt_common.sh@33 -- # local pid=67568 00:15:09.958 08:01:15 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:15:09.958 08:01:15 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:15:09.958 08:01:15 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:15:09.958 08:01:15 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:15:09.958 08:01:15 -- interrupt/interrupt_common.sh@41 -- # hash top 00:15:09.958 08:01:15 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:15:09.958 08:01:15 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:15:09.958 08:01:15 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 67568 -w 256 00:15:09.958 08:01:15 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:15:09.958 08:01:15 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 67573 root 20 0 20.1t 37436 9968 S 0.0 0.3 0:00.00 reactor_2' 00:15:09.958 08:01:15 -- interrupt/interrupt_common.sh@48 -- # echo 67573 root 20 0 20.1t 37436 9968 S 0.0 0.3 0:00.00 reactor_2 00:15:09.958 08:01:15 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:15:09.958 08:01:15 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:15:09.958 08:01:15 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:15:09.958 08:01:15 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:15:09.958 08:01:15 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:15:09.958 08:01:15 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:15:09.958 08:01:15 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:15:09.958 08:01:15 -- interrupt/interrupt_common.sh@56 -- # return 0 00:15:09.958 08:01:15 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:15:09.958 08:01:15 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:15:10.216 [2024-07-13 08:01:15.933495] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:15:10.216 [2024-07-13 08:01:15.933755] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 
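Each reactor_is_idle / reactor_is_busy call above expands to the same probe: one batch iteration of top restricted to the target pid's threads, grep the reactor's row, take the %CPU column, and compare against the script's thresholds (busy means at least 70%, idle means at most 30%). Reconstructed from the traced commands, with the ten-attempt retry loop omitted:

    reactor_is_busy_or_idle() {
        local pid=$1 idx=$2 state=$3
        local top_reactor cpu_rate
        top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
        cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
        cpu_rate=${cpu_rate%%.*}                   # 99.9 -> 99, 0.0 -> 0
        if [[ $state == busy ]]; then
            [[ $cpu_rate -lt 70 ]] && return 1     # not busy enough
        else
            [[ $cpu_rate -gt 30 ]] && return 1     # not idle enough
        fi
        return 0
    }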
00:15:10.216 [2024-07-13 08:01:15.935280] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:15:10.216 08:01:15 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:15:10.474 [2024-07-13 08:01:16.161472] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:15:10.474 [2024-07-13 08:01:16.162492] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:15:10.474 08:01:16 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:15:10.474 08:01:16 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 67568 0 00:15:10.474 08:01:16 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 67568 0 busy 00:15:10.474 08:01:16 -- interrupt/interrupt_common.sh@33 -- # local pid=67568 00:15:10.474 08:01:16 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:15:10.474 08:01:16 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:15:10.474 08:01:16 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:15:10.474 08:01:16 -- interrupt/interrupt_common.sh@41 -- # hash top 00:15:10.474 08:01:16 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:15:10.474 08:01:16 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:15:10.474 08:01:16 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 67568 -w 256 00:15:10.474 08:01:16 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:15:10.732 08:01:16 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 67568 root 20 0 20.1t 37560 9976 R 99.9 0.3 0:00.61 reactor_0' 00:15:10.732 08:01:16 -- interrupt/interrupt_common.sh@48 -- # echo 67568 root 20 0 20.1t 37560 9976 R 99.9 0.3 0:00.61 reactor_0 00:15:10.732 08:01:16 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:15:10.732 08:01:16 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:15:10.732 08:01:16 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:15:10.732 08:01:16 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:15:10.732 08:01:16 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:15:10.732 08:01:16 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:15:10.732 08:01:16 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:15:10.732 08:01:16 -- interrupt/interrupt_common.sh@56 -- # return 0 00:15:10.732 08:01:16 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:15:10.732 08:01:16 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 67568 2 00:15:10.732 08:01:16 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 67568 2 busy 00:15:10.732 08:01:16 -- interrupt/interrupt_common.sh@33 -- # local pid=67568 00:15:10.732 08:01:16 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:15:10.732 08:01:16 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:15:10.732 08:01:16 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:15:10.732 08:01:16 -- interrupt/interrupt_common.sh@41 -- # hash top 00:15:10.732 08:01:16 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:15:10.732 08:01:16 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:15:10.732 08:01:16 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 67568 -w 256 00:15:10.732 08:01:16 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:15:10.732 08:01:16 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 67573 root 20 0 
20.1t 37560 9976 R 99.9 0.3 0:00.34 reactor_2' 00:15:10.732 08:01:16 -- interrupt/interrupt_common.sh@48 -- # echo 67573 root 20 0 20.1t 37560 9976 R 99.9 0.3 0:00.34 reactor_2 00:15:10.732 08:01:16 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:15:10.732 08:01:16 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:15:10.732 08:01:16 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:15:10.732 08:01:16 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:15:10.732 08:01:16 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:15:10.732 08:01:16 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:15:10.732 08:01:16 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:15:10.732 08:01:16 -- interrupt/interrupt_common.sh@56 -- # return 0 00:15:10.732 08:01:16 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:15:10.990 [2024-07-13 08:01:16.661540] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:15:10.990 [2024-07-13 08:01:16.661763] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:15:10.990 08:01:16 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:15:10.990 08:01:16 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 67568 2 00:15:10.990 08:01:16 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 67568 2 idle 00:15:10.990 08:01:16 -- interrupt/interrupt_common.sh@33 -- # local pid=67568 00:15:10.990 08:01:16 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:15:10.990 08:01:16 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:15:10.990 08:01:16 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:15:10.990 08:01:16 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:15:10.990 08:01:16 -- interrupt/interrupt_common.sh@41 -- # hash top 00:15:10.990 08:01:16 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:15:10.990 08:01:16 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:15:10.990 08:01:16 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 67568 -w 256 00:15:10.990 08:01:16 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:15:11.248 08:01:16 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 67573 root 20 0 20.1t 37604 9976 S 0.0 0.3 0:00.50 reactor_2' 00:15:11.248 08:01:16 -- interrupt/interrupt_common.sh@48 -- # echo 67573 root 20 0 20.1t 37604 9976 S 0.0 0.3 0:00.50 reactor_2 00:15:11.248 08:01:16 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:15:11.248 08:01:16 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:15:11.248 08:01:16 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:15:11.248 08:01:16 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:15:11.248 08:01:16 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:15:11.248 08:01:16 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:15:11.248 08:01:16 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:15:11.248 08:01:16 -- interrupt/interrupt_common.sh@56 -- # return 0 00:15:11.248 08:01:16 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:15:11.248 [2024-07-13 08:01:17.009548] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 
0. 00:15:11.248 [2024-07-13 08:01:17.009905] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 00:15:11.248 [2024-07-13 08:01:17.009932] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:15:11.248 08:01:17 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:15:11.248 08:01:17 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 67568 0 00:15:11.248 08:01:17 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 67568 0 idle 00:15:11.248 08:01:17 -- interrupt/interrupt_common.sh@33 -- # local pid=67568 00:15:11.248 08:01:17 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:15:11.248 08:01:17 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:15:11.248 08:01:17 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:15:11.248 08:01:17 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:15:11.248 08:01:17 -- interrupt/interrupt_common.sh@41 -- # hash top 00:15:11.248 08:01:17 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:15:11.248 08:01:17 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:15:11.248 08:01:17 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:15:11.248 08:01:17 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 67568 -w 256 00:15:11.506 08:01:17 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 67568 root 20 0 20.1t 37660 9976 S 0.0 0.3 0:01.29 reactor_0' 00:15:11.506 08:01:17 -- interrupt/interrupt_common.sh@48 -- # echo 67568 root 20 0 20.1t 37660 9976 S 0.0 0.3 0:01.29 reactor_0 00:15:11.506 08:01:17 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:15:11.506 08:01:17 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:15:11.506 08:01:17 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:15:11.506 08:01:17 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:15:11.506 08:01:17 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:15:11.506 08:01:17 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:15:11.506 08:01:17 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:15:11.506 08:01:17 -- interrupt/interrupt_common.sh@56 -- # return 0 00:15:11.506 08:01:17 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:15:11.506 08:01:17 -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:15:11.506 08:01:17 -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:15:11.506 08:01:17 -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 67568 00:15:11.506 08:01:17 -- common/autotest_common.sh@926 -- # '[' -z 67568 ']' 00:15:11.506 08:01:17 -- common/autotest_common.sh@930 -- # kill -0 67568 00:15:11.506 08:01:17 -- common/autotest_common.sh@931 -- # uname 00:15:11.506 08:01:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:11.506 08:01:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67568 00:15:11.506 killing process with pid 67568 00:15:11.506 08:01:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:11.506 08:01:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:11.506 08:01:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67568' 00:15:11.506 08:01:17 -- common/autotest_common.sh@945 -- # kill 67568 00:15:11.506 08:01:17 -- common/autotest_common.sh@950 -- # wait 67568 00:15:11.764 08:01:17 -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:15:11.764 08:01:17 -- 
interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:15:11.764 ************************************ 00:15:11.764 END TEST reactor_set_interrupt 00:15:11.764 ************************************ 00:15:11.764 00:15:11.764 real 0m8.585s 00:15:11.764 user 0m7.905s 00:15:11.764 sys 0m1.427s 00:15:11.764 08:01:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:11.764 08:01:17 -- common/autotest_common.sh@10 -- # set +x 00:15:11.764 08:01:17 -- spdk/autotest.sh@200 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:15:11.764 08:01:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:11.764 08:01:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:11.764 08:01:17 -- common/autotest_common.sh@10 -- # set +x 00:15:11.764 ************************************ 00:15:11.764 START TEST reap_unregistered_poller 00:15:11.764 ************************************ 00:15:11.764 08:01:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:15:12.023 * Looking for test storage... 00:15:12.023 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:15:12.023 08:01:17 -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:15:12.023 08:01:17 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:15:12.023 08:01:17 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:15:12.023 08:01:17 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:15:12.023 08:01:17 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 
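A note on the busy/idle probe that the reactor_set_interrupt trace above keeps repeating: interrupt_common.sh samples the per-thread CPU usage of the target pid with a single batch run of top, extracts the %CPU column, and compares it against fixed thresholds. A minimal bash sketch of that check, reconstructed from the xtrace (the 70%/30% thresholds and the reactor_<idx> thread naming are taken from the trace; the real helper also retries the sample up to 10 times):

# Reconstructed sketch, not the verbatim interrupt_common.sh text.
reactor_is_busy_or_idle() {
    local pid=$1 idx=$2 state=$3 top_reactor cpu_rate

    # One batch snapshot (-b) of all threads (-H) of $pid; keep the reactor_<idx> row.
    top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")

    # Column 9 of top's batch output is %CPU; drop leading whitespace and the
    # fractional part so bash can compare it as an integer (99.9 -> 99).
    cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
    cpu_rate=${cpu_rate%.*}

    if [[ $state == busy ]]; then
        [[ $cpu_rate -lt 70 ]] && return 1   # a polling reactor should spin near 100%
    elif [[ $state == idle ]]; then
        [[ $cpu_rate -gt 30 ]] && return 1   # an interrupt-mode reactor should sit near 0%
    fi
    return 0
}

This is why the trace shows reactor_2 at 99.9% CPU before the reactor_set_interrupt_mode RPC and 0.0% after it: switching a reactor to interrupt mode replaces its busy poll loop with event-driven wakeups.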
00:15:12.023 08:01:17 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:12.023 08:01:17 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:15:12.023 08:01:17 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:15:12.023 08:01:17 -- common/autotest_common.sh@34 -- # set -e 00:15:12.023 08:01:17 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:15:12.023 08:01:17 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:15:12.023 08:01:17 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:15:12.023 08:01:17 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:15:12.023 08:01:17 -- common/build_config.sh@1 -- # CONFIG_RDMA=y 00:15:12.023 08:01:17 -- common/build_config.sh@2 -- # CONFIG_UNIT_TESTS=y 00:15:12.023 08:01:17 -- common/build_config.sh@3 -- # CONFIG_GOLANG=n 00:15:12.023 08:01:17 -- common/build_config.sh@4 -- # CONFIG_FUSE=n 00:15:12.023 08:01:17 -- common/build_config.sh@5 -- # CONFIG_ISAL=n 00:15:12.023 08:01:17 -- common/build_config.sh@6 -- # CONFIG_VTUNE_DIR= 00:15:12.023 08:01:17 -- common/build_config.sh@7 -- # CONFIG_CUSTOMOCF=n 00:15:12.023 08:01:17 -- common/build_config.sh@8 -- # CONFIG_IPSEC_MB_DIR= 00:15:12.023 08:01:17 -- common/build_config.sh@9 -- # CONFIG_VBDEV_COMPRESS=n 00:15:12.023 08:01:17 -- common/build_config.sh@10 -- # CONFIG_OCF_PATH= 00:15:12.023 08:01:17 -- common/build_config.sh@11 -- # CONFIG_SHARED=n 00:15:12.023 08:01:17 -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:15:12.023 08:01:17 -- common/build_config.sh@13 -- # CONFIG_TESTS=y 00:15:12.023 08:01:17 -- common/build_config.sh@14 -- # CONFIG_APPS=y 00:15:12.024 08:01:17 -- common/build_config.sh@15 -- # CONFIG_ISAL_CRYPTO=n 00:15:12.024 08:01:17 -- common/build_config.sh@16 -- # CONFIG_LIBDIR= 00:15:12.024 08:01:17 -- common/build_config.sh@17 -- # CONFIG_DPDK_COMPRESSDEV=n 00:15:12.024 08:01:17 -- common/build_config.sh@18 -- # CONFIG_DAOS_DIR= 00:15:12.024 08:01:17 -- common/build_config.sh@19 -- # CONFIG_ISCSI_INITIATOR=n 00:15:12.024 08:01:17 -- common/build_config.sh@20 -- # CONFIG_DPDK_PKG_CONFIG=n 00:15:12.024 08:01:17 -- common/build_config.sh@21 -- # CONFIG_ASAN=y 00:15:12.024 08:01:17 -- common/build_config.sh@22 -- # CONFIG_LTO=n 00:15:12.024 08:01:17 -- common/build_config.sh@23 -- # CONFIG_CET=n 00:15:12.024 08:01:17 -- common/build_config.sh@24 -- # CONFIG_FUZZER=n 00:15:12.024 08:01:17 -- common/build_config.sh@25 -- # CONFIG_USDT=n 00:15:12.024 08:01:17 -- common/build_config.sh@26 -- # CONFIG_VTUNE=n 00:15:12.024 08:01:17 -- common/build_config.sh@27 -- # CONFIG_VHOST=y 00:15:12.024 08:01:17 -- common/build_config.sh@28 -- # CONFIG_WPDK_DIR= 00:15:12.024 08:01:17 -- common/build_config.sh@29 -- # CONFIG_UBLK=n 00:15:12.024 08:01:17 -- common/build_config.sh@30 -- # CONFIG_URING=n 00:15:12.024 08:01:17 -- common/build_config.sh@31 -- # CONFIG_SMA=n 00:15:12.024 08:01:17 -- common/build_config.sh@32 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:15:12.024 08:01:17 -- common/build_config.sh@33 -- # CONFIG_IDXD_KERNEL=n 00:15:12.024 08:01:17 -- common/build_config.sh@34 -- # CONFIG_FC_PATH= 00:15:12.024 08:01:17 -- common/build_config.sh@35 -- # CONFIG_PREFIX=/usr/local 00:15:12.024 08:01:17 -- common/build_config.sh@36 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=n 00:15:12.024 08:01:17 -- common/build_config.sh@37 -- # CONFIG_XNVME=n 00:15:12.024 
08:01:17 -- common/build_config.sh@38 -- # CONFIG_RDMA_PROV=verbs 00:15:12.024 08:01:17 -- common/build_config.sh@39 -- # CONFIG_RDMA_SET_TOS=y 00:15:12.024 08:01:17 -- common/build_config.sh@40 -- # CONFIG_FUZZER_LIB= 00:15:12.024 08:01:17 -- common/build_config.sh@41 -- # CONFIG_HAVE_LIBARCHIVE=n 00:15:12.024 08:01:17 -- common/build_config.sh@42 -- # CONFIG_ARCH=native 00:15:12.024 08:01:17 -- common/build_config.sh@43 -- # CONFIG_PGO_CAPTURE=n 00:15:12.024 08:01:17 -- common/build_config.sh@44 -- # CONFIG_DAOS=y 00:15:12.024 08:01:17 -- common/build_config.sh@45 -- # CONFIG_WERROR=y 00:15:12.024 08:01:17 -- common/build_config.sh@46 -- # CONFIG_DEBUG=y 00:15:12.024 08:01:17 -- common/build_config.sh@47 -- # CONFIG_AVAHI=n 00:15:12.024 08:01:17 -- common/build_config.sh@48 -- # CONFIG_CROSS_PREFIX= 00:15:12.024 08:01:17 -- common/build_config.sh@49 -- # CONFIG_PGO_USE=n 00:15:12.024 08:01:17 -- common/build_config.sh@50 -- # CONFIG_CRYPTO=n 00:15:12.024 08:01:17 -- common/build_config.sh@51 -- # CONFIG_HAVE_ARC4RANDOM=n 00:15:12.024 08:01:17 -- common/build_config.sh@52 -- # CONFIG_OPENSSL_PATH= 00:15:12.024 08:01:17 -- common/build_config.sh@53 -- # CONFIG_EXAMPLES=y 00:15:12.024 08:01:17 -- common/build_config.sh@54 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:15:12.024 08:01:17 -- common/build_config.sh@55 -- # CONFIG_MAX_LCORES= 00:15:12.024 08:01:17 -- common/build_config.sh@56 -- # CONFIG_VIRTIO=y 00:15:12.024 08:01:17 -- common/build_config.sh@57 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:15:12.024 08:01:17 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB=n 00:15:12.024 08:01:17 -- common/build_config.sh@59 -- # CONFIG_UBSAN=n 00:15:12.024 08:01:17 -- common/build_config.sh@60 -- # CONFIG_HAVE_EXECINFO_H=y 00:15:12.024 08:01:17 -- common/build_config.sh@61 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:15:12.024 08:01:17 -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:15:12.024 08:01:17 -- common/build_config.sh@63 -- # CONFIG_URING_PATH= 00:15:12.024 08:01:17 -- common/build_config.sh@64 -- # CONFIG_NVME_CUSE=y 00:15:12.024 08:01:17 -- common/build_config.sh@65 -- # CONFIG_URING_ZNS=n 00:15:12.024 08:01:17 -- common/build_config.sh@66 -- # CONFIG_VFIO_USER=n 00:15:12.024 08:01:17 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:15:12.024 08:01:17 -- common/build_config.sh@68 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=n 00:15:12.024 08:01:17 -- common/build_config.sh@69 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:15:12.024 08:01:17 -- common/build_config.sh@70 -- # CONFIG_RBD=n 00:15:12.024 08:01:17 -- common/build_config.sh@71 -- # CONFIG_RAID5F=n 00:15:12.024 08:01:17 -- common/build_config.sh@72 -- # CONFIG_VFIO_USER_DIR= 00:15:12.024 08:01:17 -- common/build_config.sh@73 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:15:12.024 08:01:17 -- common/build_config.sh@74 -- # CONFIG_TSAN=n 00:15:12.024 08:01:17 -- common/build_config.sh@75 -- # CONFIG_IDXD=y 00:15:12.024 08:01:17 -- common/build_config.sh@76 -- # CONFIG_OCF=n 00:15:12.024 08:01:17 -- common/build_config.sh@77 -- # CONFIG_CRYPTO_MLX5=n 00:15:12.024 08:01:17 -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:15:12.024 08:01:17 -- common/build_config.sh@79 -- # CONFIG_COVERAGE=y 00:15:12.024 08:01:17 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:15:12.024 08:01:17 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:15:12.024 08:01:17 -- common/applications.sh@8 
-- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:15:12.024 08:01:17 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:15:12.024 08:01:17 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:15:12.024 08:01:17 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:15:12.024 08:01:17 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:15:12.024 08:01:17 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:15:12.024 08:01:17 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:15:12.024 08:01:17 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:15:12.024 08:01:17 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:15:12.024 08:01:17 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:15:12.024 08:01:17 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:15:12.024 08:01:17 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:15:12.024 08:01:17 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:15:12.024 08:01:17 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:15:12.024 #define SPDK_CONFIG_H 00:15:12.024 #define SPDK_CONFIG_APPS 1 00:15:12.024 #define SPDK_CONFIG_ARCH native 00:15:12.024 #define SPDK_CONFIG_ASAN 1 00:15:12.024 #undef SPDK_CONFIG_AVAHI 00:15:12.024 #undef SPDK_CONFIG_CET 00:15:12.024 #define SPDK_CONFIG_COVERAGE 1 00:15:12.024 #define SPDK_CONFIG_CROSS_PREFIX 00:15:12.024 #undef SPDK_CONFIG_CRYPTO 00:15:12.024 #undef SPDK_CONFIG_CRYPTO_MLX5 00:15:12.024 #undef SPDK_CONFIG_CUSTOMOCF 00:15:12.024 #define SPDK_CONFIG_DAOS 1 00:15:12.024 #define SPDK_CONFIG_DAOS_DIR 00:15:12.024 #define SPDK_CONFIG_DEBUG 1 00:15:12.024 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:15:12.024 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:15:12.024 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:15:12.024 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:15:12.024 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:15:12.024 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:15:12.024 #define SPDK_CONFIG_EXAMPLES 1 00:15:12.024 #undef SPDK_CONFIG_FC 00:15:12.024 #define SPDK_CONFIG_FC_PATH 00:15:12.024 #define SPDK_CONFIG_FIO_PLUGIN 1 00:15:12.024 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:15:12.024 #undef SPDK_CONFIG_FUSE 00:15:12.024 #undef SPDK_CONFIG_FUZZER 00:15:12.024 #define SPDK_CONFIG_FUZZER_LIB 00:15:12.024 #undef SPDK_CONFIG_GOLANG 00:15:12.024 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:15:12.024 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:15:12.024 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:15:12.024 #undef SPDK_CONFIG_HAVE_LIBBSD 00:15:12.024 #undef SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 00:15:12.024 #define SPDK_CONFIG_IDXD 1 00:15:12.024 #undef SPDK_CONFIG_IDXD_KERNEL 00:15:12.024 #undef SPDK_CONFIG_IPSEC_MB 00:15:12.024 #define SPDK_CONFIG_IPSEC_MB_DIR 00:15:12.024 #undef SPDK_CONFIG_ISAL 00:15:12.024 #undef SPDK_CONFIG_ISAL_CRYPTO 00:15:12.024 #undef SPDK_CONFIG_ISCSI_INITIATOR 00:15:12.024 #define SPDK_CONFIG_LIBDIR 00:15:12.024 #undef SPDK_CONFIG_LTO 00:15:12.024 #define SPDK_CONFIG_MAX_LCORES 00:15:12.024 #define SPDK_CONFIG_NVME_CUSE 1 00:15:12.024 #undef SPDK_CONFIG_OCF 00:15:12.024 #define SPDK_CONFIG_OCF_PATH 00:15:12.024 #define SPDK_CONFIG_OPENSSL_PATH 
00:15:12.024 #undef SPDK_CONFIG_PGO_CAPTURE 00:15:12.024 #undef SPDK_CONFIG_PGO_USE 00:15:12.024 #define SPDK_CONFIG_PREFIX /usr/local 00:15:12.024 #undef SPDK_CONFIG_RAID5F 00:15:12.024 #undef SPDK_CONFIG_RBD 00:15:12.024 #define SPDK_CONFIG_RDMA 1 00:15:12.024 #define SPDK_CONFIG_RDMA_PROV verbs 00:15:12.024 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:15:12.024 #undef SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 00:15:12.024 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:15:12.024 #undef SPDK_CONFIG_SHARED 00:15:12.024 #undef SPDK_CONFIG_SMA 00:15:12.024 #define SPDK_CONFIG_TESTS 1 00:15:12.024 #undef SPDK_CONFIG_TSAN 00:15:12.024 #undef SPDK_CONFIG_UBLK 00:15:12.024 #undef SPDK_CONFIG_UBSAN 00:15:12.024 #define SPDK_CONFIG_UNIT_TESTS 1 00:15:12.024 #undef SPDK_CONFIG_URING 00:15:12.024 #define SPDK_CONFIG_URING_PATH 00:15:12.024 #undef SPDK_CONFIG_URING_ZNS 00:15:12.024 #undef SPDK_CONFIG_USDT 00:15:12.024 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:15:12.024 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:15:12.024 #undef SPDK_CONFIG_VFIO_USER 00:15:12.024 #define SPDK_CONFIG_VFIO_USER_DIR 00:15:12.024 #define SPDK_CONFIG_VHOST 1 00:15:12.024 #define SPDK_CONFIG_VIRTIO 1 00:15:12.024 #undef SPDK_CONFIG_VTUNE 00:15:12.024 #define SPDK_CONFIG_VTUNE_DIR 00:15:12.024 #define SPDK_CONFIG_WERROR 1 00:15:12.024 #define SPDK_CONFIG_WPDK_DIR 00:15:12.024 #undef SPDK_CONFIG_XNVME 00:15:12.024 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:15:12.024 08:01:17 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:15:12.024 08:01:17 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:12.024 08:01:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:12.024 08:01:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.024 08:01:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.024 08:01:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:15:12.024 08:01:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:15:12.024 08:01:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:15:12.024 08:01:17 -- paths/export.sh@5 -- # export PATH 00:15:12.025 08:01:17 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:15:12.025 08:01:17 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:15:12.025 08:01:17 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:15:12.025 08:01:17 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:15:12.025 08:01:17 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:15:12.025 08:01:17 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:15:12.025 08:01:17 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:15:12.025 08:01:17 -- pm/common@16 -- # TEST_TAG=N/A 00:15:12.025 08:01:17 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:15:12.025 08:01:17 -- common/autotest_common.sh@52 -- # : 1 00:15:12.025 08:01:17 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:15:12.025 08:01:17 -- common/autotest_common.sh@56 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:15:12.025 08:01:17 -- common/autotest_common.sh@58 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:15:12.025 08:01:17 -- common/autotest_common.sh@60 -- # : 1 00:15:12.025 08:01:17 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:15:12.025 08:01:17 -- common/autotest_common.sh@62 -- # : 1 00:15:12.025 08:01:17 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:15:12.025 08:01:17 -- common/autotest_common.sh@64 -- # : 00:15:12.025 08:01:17 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:15:12.025 08:01:17 -- common/autotest_common.sh@66 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:15:12.025 08:01:17 -- common/autotest_common.sh@68 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:15:12.025 08:01:17 -- common/autotest_common.sh@70 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:15:12.025 08:01:17 -- common/autotest_common.sh@72 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:15:12.025 08:01:17 -- common/autotest_common.sh@74 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:15:12.025 08:01:17 -- common/autotest_common.sh@76 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:15:12.025 08:01:17 -- common/autotest_common.sh@78 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:15:12.025 08:01:17 -- common/autotest_common.sh@80 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:15:12.025 08:01:17 -- common/autotest_common.sh@82 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:15:12.025 08:01:17 -- common/autotest_common.sh@84 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:15:12.025 08:01:17 -- common/autotest_common.sh@86 -- # : 0 
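The long run of ': <value>' lines each followed by 'export SPDK_TEST_*' in this stretch of the trace is bash's standard default-assignment idiom: ':' is a no-op command whose argument expansion assigns a default only when the variable is not already set, and the export makes the result visible to child test scripts. A hedged sketch of the pattern (flag names and defaults copied from the trace; the exact spelling in autotest_common.sh may differ):

# Give each autotest knob a default unless the caller's environment already
# set it, then export it for all sourced and child test scripts.
: "${RUN_NIGHTLY:=1}";                 export RUN_NIGHTLY
: "${SPDK_RUN_FUNCTIONAL_TEST:=1}";    export SPDK_RUN_FUNCTIONAL_TEST
: "${SPDK_TEST_UNITTEST:=1}";          export SPDK_TEST_UNITTEST
: "${SPDK_TEST_NVME:=0}";              export SPDK_TEST_NVME
: "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"; export SPDK_TEST_NVMF_TRANSPORT

This also explains why the xtrace prints only ': 1' or ': rdma': by the time the line is traced, the ${VAR:=default} expansion has already been performed.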
00:15:12.025 08:01:17 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:15:12.025 08:01:17 -- common/autotest_common.sh@88 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:15:12.025 08:01:17 -- common/autotest_common.sh@90 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:15:12.025 08:01:17 -- common/autotest_common.sh@92 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:15:12.025 08:01:17 -- common/autotest_common.sh@94 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:15:12.025 08:01:17 -- common/autotest_common.sh@96 -- # : rdma 00:15:12.025 08:01:17 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:15:12.025 08:01:17 -- common/autotest_common.sh@98 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:15:12.025 08:01:17 -- common/autotest_common.sh@100 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:15:12.025 08:01:17 -- common/autotest_common.sh@102 -- # : 1 00:15:12.025 08:01:17 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:15:12.025 08:01:17 -- common/autotest_common.sh@104 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:15:12.025 08:01:17 -- common/autotest_common.sh@106 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:15:12.025 08:01:17 -- common/autotest_common.sh@108 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:15:12.025 08:01:17 -- common/autotest_common.sh@110 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:15:12.025 08:01:17 -- common/autotest_common.sh@112 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:15:12.025 08:01:17 -- common/autotest_common.sh@114 -- # : 1 00:15:12.025 08:01:17 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:15:12.025 08:01:17 -- common/autotest_common.sh@116 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:15:12.025 08:01:17 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:15:12.025 08:01:17 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:15:12.025 08:01:17 -- common/autotest_common.sh@120 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:15:12.025 08:01:17 -- common/autotest_common.sh@122 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:15:12.025 08:01:17 -- common/autotest_common.sh@124 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:15:12.025 08:01:17 -- common/autotest_common.sh@126 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:15:12.025 08:01:17 -- common/autotest_common.sh@128 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:15:12.025 08:01:17 -- common/autotest_common.sh@130 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:15:12.025 08:01:17 -- common/autotest_common.sh@132 -- # : v22.11.4 00:15:12.025 08:01:17 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:15:12.025 
08:01:17 -- common/autotest_common.sh@134 -- # : true 00:15:12.025 08:01:17 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:15:12.025 08:01:17 -- common/autotest_common.sh@136 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:15:12.025 08:01:17 -- common/autotest_common.sh@138 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:15:12.025 08:01:17 -- common/autotest_common.sh@140 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:15:12.025 08:01:17 -- common/autotest_common.sh@142 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:15:12.025 08:01:17 -- common/autotest_common.sh@144 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:15:12.025 08:01:17 -- common/autotest_common.sh@146 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:15:12.025 08:01:17 -- common/autotest_common.sh@148 -- # : 00:15:12.025 08:01:17 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:15:12.025 08:01:17 -- common/autotest_common.sh@150 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:15:12.025 08:01:17 -- common/autotest_common.sh@152 -- # : 1 00:15:12.025 08:01:17 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:15:12.025 08:01:17 -- common/autotest_common.sh@154 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:15:12.025 08:01:17 -- common/autotest_common.sh@156 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:15:12.025 08:01:17 -- common/autotest_common.sh@158 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:15:12.025 08:01:17 -- common/autotest_common.sh@160 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:15:12.025 08:01:17 -- common/autotest_common.sh@163 -- # : 00:15:12.025 08:01:17 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:15:12.025 08:01:17 -- common/autotest_common.sh@165 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:15:12.025 08:01:17 -- common/autotest_common.sh@167 -- # : 0 00:15:12.025 08:01:17 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:15:12.025 08:01:17 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:15:12.025 08:01:17 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:15:12.025 08:01:17 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:15:12.025 08:01:17 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:15:12.025 08:01:17 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:12.025 08:01:17 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:12.025 08:01:17 -- common/autotest_common.sh@174 -- # export 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:12.025 08:01:17 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:12.025 08:01:17 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:15:12.025 08:01:17 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:15:12.025 08:01:17 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:15:12.025 08:01:17 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:15:12.025 08:01:17 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:15:12.025 08:01:17 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:15:12.025 08:01:17 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:12.025 08:01:17 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:12.026 08:01:17 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:12.026 08:01:17 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:12.026 08:01:17 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:15:12.026 08:01:17 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:15:12.026 08:01:17 -- common/autotest_common.sh@196 -- # cat 00:15:12.026 08:01:17 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:15:12.026 08:01:17 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:12.026 08:01:17 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:12.026 08:01:17 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:12.026 08:01:17 -- common/autotest_common.sh@226 -- # 
DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:12.026 08:01:17 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:15:12.026 08:01:17 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:15:12.026 08:01:17 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:15:12.026 08:01:17 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:15:12.026 08:01:17 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:15:12.026 08:01:17 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:15:12.026 08:01:17 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:15:12.026 08:01:17 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:15:12.026 08:01:17 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:15:12.026 08:01:17 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:15:12.026 08:01:17 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:15:12.026 08:01:17 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:15:12.026 08:01:17 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:12.026 08:01:17 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:12.026 08:01:17 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:15:12.026 08:01:17 -- common/autotest_common.sh@249 -- # export valgrind= 00:15:12.026 08:01:17 -- common/autotest_common.sh@249 -- # valgrind= 00:15:12.026 08:01:17 -- common/autotest_common.sh@255 -- # uname -s 00:15:12.026 08:01:17 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:15:12.026 08:01:17 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:15:12.026 08:01:17 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:15:12.026 08:01:17 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:15:12.026 08:01:17 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:15:12.026 08:01:17 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:15:12.026 08:01:17 -- common/autotest_common.sh@265 -- # MAKE=make 00:15:12.026 08:01:17 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:15:12.026 08:01:17 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:15:12.026 08:01:17 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:15:12.026 08:01:17 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:15:12.026 08:01:17 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:15:12.026 08:01:17 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:15:12.026 08:01:17 -- common/autotest_common.sh@309 -- # [[ -z 67732 ]] 00:15:12.026 08:01:17 -- common/autotest_common.sh@309 -- # kill -0 67732 00:15:12.026 08:01:17 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:15:12.026 08:01:17 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:15:12.026 08:01:17 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:15:12.026 08:01:17 -- common/autotest_common.sh@322 -- # local mount target_dir 00:15:12.026 08:01:17 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:15:12.026 08:01:17 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:15:12.026 08:01:17 -- 
common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:15:12.026 08:01:17 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:15:12.026 08:01:17 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.8az9Fs 00:15:12.026 08:01:17 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:15:12.026 08:01:17 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:15:12.026 08:01:17 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:15:12.026 08:01:17 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.8az9Fs/tests/interrupt /tmp/spdk.8az9Fs 00:15:12.026 08:01:17 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:15:12.026 08:01:17 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:15:12.026 08:01:17 -- common/autotest_common.sh@318 -- # df -T 00:15:12.026 08:01:17 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:15:12.026 08:01:17 -- common/autotest_common.sh@352 -- # mounts["$mount"]=devtmpfs 00:15:12.026 08:01:17 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:15:12.026 08:01:17 -- common/autotest_common.sh@353 -- # avails["$mount"]=6267637760 00:15:12.026 08:01:17 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6267637760 00:15:12.026 08:01:17 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:15:12.026 08:01:17 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:15:12.026 08:01:17 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:15:12.026 08:01:17 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:15:12.026 08:01:17 -- common/autotest_common.sh@353 -- # avails["$mount"]=6296928256 00:15:12.026 08:01:17 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6298185728 00:15:12.026 08:01:17 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:15:12.026 08:01:17 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:15:12.026 08:01:17 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:15:12.026 08:01:17 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:15:12.026 08:01:17 -- common/autotest_common.sh@353 -- # avails["$mount"]=6277181440 00:15:12.026 08:01:17 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6298185728 00:15:12.026 08:01:17 -- common/autotest_common.sh@354 -- # uses["$mount"]=21004288 00:15:12.026 08:01:17 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:15:12.026 08:01:17 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:15:12.026 08:01:17 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:15:12.026 08:01:17 -- common/autotest_common.sh@353 -- # avails["$mount"]=6298185728 00:15:12.026 08:01:17 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6298185728 00:15:12.026 08:01:17 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:15:12.026 08:01:17 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:15:12.026 08:01:17 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:15:12.026 08:01:17 -- common/autotest_common.sh@352 -- # fss["$mount"]=xfs 00:15:12.026 08:01:17 -- common/autotest_common.sh@353 -- # avails["$mount"]=12940201984 00:15:12.026 08:01:17 -- common/autotest_common.sh@353 -- # sizes["$mount"]=21463302144 00:15:12.026 08:01:17 -- common/autotest_common.sh@354 -- # 
uses["$mount"]=8523100160 00:15:12.026 08:01:17 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:15:12.026 08:01:17 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:15:12.026 08:01:17 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:15:12.026 08:01:17 -- common/autotest_common.sh@353 -- # avails["$mount"]=1259638784 00:15:12.026 08:01:17 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1259638784 00:15:12.026 08:01:17 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:15:12.026 08:01:17 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:15:12.026 08:01:17 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/centos7-vg-autotest/centos7-libvirt/output 00:15:12.026 08:01:17 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:15:12.026 08:01:17 -- common/autotest_common.sh@353 -- # avails["$mount"]=96008212480 00:15:12.026 08:01:17 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:15:12.026 08:01:17 -- common/autotest_common.sh@354 -- # uses["$mount"]=3694567424 00:15:12.026 08:01:17 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:15:12.026 08:01:17 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:15:12.026 08:01:17 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:15:12.026 08:01:17 -- common/autotest_common.sh@353 -- # avails["$mount"]=1259638784 00:15:12.026 08:01:17 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1259638784 00:15:12.026 08:01:17 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:15:12.026 08:01:17 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:15:12.026 08:01:17 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:15:12.026 * Looking for test storage... 
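The df walk just above is set_test_storage sizing up every mount ahead of the '* Looking for test storage...' banner: it records the free space per mount point, then picks the first candidate directory (the test dir itself, else a mktemp fallback) whose backing mount can hold the requested size. A condensed sketch of that selection, with the caveat that the df flags and the existing-ancestor walk below are reconstructions rather than the verbatim autotest_common.sh code:

# Assumes $testdir is already set by the caller, as in the trace.
set_test_storage() {
    local requested_size=$1 target_dir probe mount target_space
    local -A avails

    # df -T -B1 columns: source type size used avail use% mount.
    while read -r _ _ _ _ avail _ mount; do
        avails[$mount]=$avail
    done < <(df -T -B1 | grep -v Filesystem)

    for target_dir in "$testdir" "$(mktemp -udt spdk.XXXXXX)/tests/${testdir##*/}"; do
        # The mktemp fallback does not exist yet (-u), so walk up to the
        # nearest existing ancestor to learn which mount would back it.
        probe=$target_dir
        while [[ ! -e $probe && $probe == */* ]]; do probe=${probe%/*}; done
        mount=$(df "${probe:-/}" | awk '$1 !~ /Filesystem/{print $6}')

        target_space=${avails[$mount]:-0}
        if (( target_space >= requested_size )); then
            mkdir -p "$target_dir"
            export SPDK_TEST_STORAGE=$target_dir
            printf '* Found test storage at %s\n' "$target_dir"
            return 0
        fi
    done
    return 1
}

In this run the root xfs mount had 12940201984 bytes available against a requested 2214592512, so /home/vagrant/spdk_repo/spdk/test/interrupt was used directly.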
00:15:12.026 08:01:17 -- common/autotest_common.sh@359 -- # local target_space new_size 00:15:12.026 08:01:17 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:15:12.026 08:01:17 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:15:12.026 08:01:17 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:15:12.026 08:01:17 -- common/autotest_common.sh@363 -- # mount=/ 00:15:12.026 08:01:17 -- common/autotest_common.sh@365 -- # target_space=12940201984 00:15:12.026 08:01:17 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:15:12.026 08:01:17 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:15:12.026 08:01:17 -- common/autotest_common.sh@371 -- # [[ xfs == tmpfs ]] 00:15:12.026 08:01:17 -- common/autotest_common.sh@371 -- # [[ xfs == ramfs ]] 00:15:12.026 08:01:17 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:15:12.026 08:01:17 -- common/autotest_common.sh@372 -- # new_size=10737692672 00:15:12.026 08:01:17 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:15:12.026 08:01:17 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:15:12.026 08:01:17 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:15:12.026 08:01:17 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:15:12.026 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:15:12.026 08:01:17 -- common/autotest_common.sh@380 -- # return 0 00:15:12.026 08:01:17 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:15:12.026 08:01:17 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:15:12.026 08:01:17 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:15:12.026 08:01:17 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:15:12.026 08:01:17 -- common/autotest_common.sh@1672 -- # true 00:15:12.026 08:01:17 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:15:12.026 08:01:17 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:15:12.026 08:01:17 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:15:12.026 08:01:17 -- common/autotest_common.sh@27 -- # exec 00:15:12.026 08:01:17 -- common/autotest_common.sh@29 -- # exec 00:15:12.026 08:01:17 -- common/autotest_common.sh@31 -- # xtrace_restore 00:15:12.026 08:01:17 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:15:12.026 08:01:17 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:15:12.026 08:01:17 -- common/autotest_common.sh@18 -- # set -x 00:15:12.027 08:01:17 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:12.027 08:01:17 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:15:12.027 08:01:17 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:15:12.027 08:01:17 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:15:12.027 08:01:17 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:15:12.027 08:01:17 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:15:12.027 08:01:17 -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:15:12.027 08:01:17 -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:15:12.027 08:01:17 -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:15:12.027 08:01:17 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.027 08:01:17 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:15:12.027 08:01:17 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=67779 00:15:12.027 08:01:17 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:12.027 08:01:17 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 67779 /var/tmp/spdk.sock 00:15:12.027 08:01:17 -- common/autotest_common.sh@819 -- # '[' -z 67779 ']' 00:15:12.027 08:01:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.027 08:01:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:12.027 08:01:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.027 08:01:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:12.027 08:01:17 -- common/autotest_common.sh@10 -- # set +x 00:15:12.027 08:01:17 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:15:12.285 [2024-07-13 08:01:17.834144] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
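With interrupt_tgt up, reap_unregistered_poller talks to it over the UNIX-domain RPC socket and snapshots the app thread's pollers before and after the AIO bdev work, comparing the two name lists. A short sketch of that pipeline, using the same jq filters the trace shows (paths as in this workspace):

# Dump thread 0's poller state as JSON and pull out the poller names,
# mirroring the thread_get_pollers / jq calls in the trace.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

app_thread=$("$rpc_py" -s /var/tmp/spdk.sock thread_get_pollers | jq -r '.threads[0]')

native_pollers=$(jq -r '.active_pollers[].name' <<< "$app_thread")
native_pollers+=" $(jq -r '.timed_pollers[].name' <<< "$app_thread")"

# In this run the only poller present is the timed rpc_subsystem_poll, so the
# later snapshot must match it exactly once the unregistered pollers are reaped.
echo "$native_pollers"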
00:15:12.285 [2024-07-13 08:01:17.834333] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67779 ] 00:15:12.285 [2024-07-13 08:01:17.971197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:12.285 [2024-07-13 08:01:18.016790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.285 [2024-07-13 08:01:18.016953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.285 [2024-07-13 08:01:18.016951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.285 [2024-07-13 08:01:18.083132] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:12.851 08:01:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:12.851 08:01:18 -- common/autotest_common.sh@852 -- # return 0 00:15:12.851 08:01:18 -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:15:12.851 08:01:18 -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:15:12.851 08:01:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.851 08:01:18 -- common/autotest_common.sh@10 -- # set +x 00:15:12.851 08:01:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:13.109 08:01:18 -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:15:13.109 "name": "app_thread", 00:15:13.109 "id": 1, 00:15:13.109 "active_pollers": [], 00:15:13.109 "timed_pollers": [ 00:15:13.109 { 00:15:13.109 "name": "rpc_subsystem_poll", 00:15:13.109 "id": 1, 00:15:13.109 "state": "waiting", 00:15:13.109 "run_count": 0, 00:15:13.109 "busy_count": 0, 00:15:13.109 "period_ticks": 8400000 00:15:13.109 } 00:15:13.109 ], 00:15:13.109 "paused_pollers": [] 00:15:13.109 }' 00:15:13.109 08:01:18 -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:15:13.109 08:01:18 -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:15:13.109 08:01:18 -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:15:13.109 08:01:18 -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:15:13.109 08:01:18 -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll 00:15:13.109 08:01:18 -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:15:13.109 08:01:18 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:15:13.109 08:01:18 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:15:13.109 08:01:18 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:15:13.109 5000+0 records in 00:15:13.109 5000+0 records out 00:15:13.109 10240000 bytes (10 MB) copied, 0.0251812 s, 407 MB/s 00:15:13.109 08:01:18 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:15:13.367 AIO0 00:15:13.367 08:01:19 -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:13.625 08:01:19 -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:15:13.625 08:01:19 -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:15:13.625 08:01:19 -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r 
'.threads[0]' 00:15:13.625 08:01:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:13.625 08:01:19 -- common/autotest_common.sh@10 -- # set +x 00:15:13.625 08:01:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:13.625 08:01:19 -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:15:13.625 "name": "app_thread", 00:15:13.625 "id": 1, 00:15:13.625 "active_pollers": [], 00:15:13.625 "timed_pollers": [ 00:15:13.625 { 00:15:13.625 "name": "rpc_subsystem_poll", 00:15:13.625 "id": 1, 00:15:13.625 "state": "waiting", 00:15:13.625 "run_count": 0, 00:15:13.625 "busy_count": 0, 00:15:13.625 "period_ticks": 8400000 00:15:13.625 } 00:15:13.625 ], 00:15:13.625 "paused_pollers": [] 00:15:13.625 }' 00:15:13.625 08:01:19 -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:15:13.884 08:01:19 -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:15:13.884 08:01:19 -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:15:13.884 08:01:19 -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:15:13.884 08:01:19 -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll 00:15:13.884 08:01:19 -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l ]] 00:15:13.884 08:01:19 -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:13.884 08:01:19 -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 67779 00:15:13.884 08:01:19 -- common/autotest_common.sh@926 -- # '[' -z 67779 ']' 00:15:13.884 08:01:19 -- common/autotest_common.sh@930 -- # kill -0 67779 00:15:13.884 08:01:19 -- common/autotest_common.sh@931 -- # uname 00:15:13.884 08:01:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:13.884 08:01:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67779 00:15:13.884 killing process with pid 67779 00:15:13.884 08:01:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:13.884 08:01:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:13.884 08:01:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67779' 00:15:13.884 08:01:19 -- common/autotest_common.sh@945 -- # kill 67779 00:15:13.884 08:01:19 -- common/autotest_common.sh@950 -- # wait 67779 00:15:14.143 08:01:19 -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:15:14.143 08:01:19 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:15:14.143 00:15:14.143 real 0m2.246s 00:15:14.143 user 0m1.405s 00:15:14.143 sys 0m0.465s 00:15:14.143 ************************************ 00:15:14.143 END TEST reap_unregistered_poller 00:15:14.143 ************************************ 00:15:14.143 08:01:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:14.143 08:01:19 -- common/autotest_common.sh@10 -- # set +x 00:15:14.143 08:01:19 -- spdk/autotest.sh@204 -- # uname -s 00:15:14.143 08:01:19 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:15:14.143 08:01:19 -- spdk/autotest.sh@205 -- # [[ 1 -eq 1 ]] 00:15:14.143 08:01:19 -- spdk/autotest.sh@211 -- # [[ 0 -eq 0 ]] 00:15:14.143 08:01:19 -- spdk/autotest.sh@212 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:15:14.143 08:01:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:14.143 08:01:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:14.143 08:01:19 -- common/autotest_common.sh@10 -- 
# set +x 00:15:14.143 ************************************ 00:15:14.143 START TEST spdk_dd 00:15:14.143 ************************************ 00:15:14.143 08:01:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:15:14.143 * Looking for test storage... 00:15:14.143 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:15:14.143 08:01:19 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:14.143 08:01:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:14.143 08:01:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:14.143 08:01:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:14.143 08:01:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:15:14.143 08:01:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:15:14.143 08:01:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:15:14.143 08:01:19 -- paths/export.sh@5 -- # export PATH 00:15:14.143 08:01:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:15:14.143 08:01:19 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:14.402 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:15:14.402 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:14.402 08:01:20 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:15:14.402 08:01:20 -- dd/dd.sh@11 -- # nvme_in_userspace 00:15:14.403 08:01:20 -- scripts/common.sh@311 -- # local bdf bdfs 00:15:14.403 08:01:20 -- scripts/common.sh@312 -- # local nvmes 00:15:14.403 08:01:20 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:15:14.403 08:01:20 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:15:14.403 08:01:20 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:15:14.403 08:01:20 -- scripts/common.sh@297 -- # local bdf= 00:15:14.403 08:01:20 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:15:14.403 08:01:20 -- scripts/common.sh@232 -- # local class 00:15:14.403 08:01:20 -- scripts/common.sh@233 -- # local subclass 00:15:14.403 08:01:20 -- scripts/common.sh@234 -- # local progif 00:15:14.403 08:01:20 -- scripts/common.sh@235 -- # printf %02x 1 00:15:14.403 08:01:20 -- 
scripts/common.sh@235 -- # class=01 00:15:14.403 08:01:20 -- scripts/common.sh@236 -- # printf %02x 8 00:15:14.403 08:01:20 -- scripts/common.sh@236 -- # subclass=08 00:15:14.403 08:01:20 -- scripts/common.sh@237 -- # printf %02x 2 00:15:14.403 08:01:20 -- scripts/common.sh@237 -- # progif=02 00:15:14.403 08:01:20 -- scripts/common.sh@239 -- # hash lspci 00:15:14.403 08:01:20 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:15:14.403 08:01:20 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:15:14.403 08:01:20 -- scripts/common.sh@242 -- # grep -i -- -p02 00:15:14.403 08:01:20 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:15:14.403 08:01:20 -- scripts/common.sh@244 -- # tr -d '"' 00:15:14.403 08:01:20 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:15:14.403 08:01:20 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:15:14.403 08:01:20 -- scripts/common.sh@15 -- # local i 00:15:14.403 08:01:20 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:15:14.403 08:01:20 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:15:14.403 08:01:20 -- scripts/common.sh@24 -- # return 0 00:15:14.403 08:01:20 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:15:14.403 08:01:20 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:15:14.403 08:01:20 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:15:14.403 08:01:20 -- scripts/common.sh@322 -- # uname -s 00:15:14.403 08:01:20 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:15:14.403 08:01:20 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:15:14.403 08:01:20 -- scripts/common.sh@327 -- # (( 1 )) 00:15:14.403 08:01:20 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 00:15:14.403 08:01:20 -- dd/dd.sh@13 -- # check_liburing 00:15:14.403 08:01:20 -- dd/common.sh@139 -- # local lib so 00:15:14.403 08:01:20 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:15:14.403 08:01:20 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ libasan.so.6 == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ libssl.so.1.1 == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ libdl.so.2 == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ librt.so.1 == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- 
dd/common.sh@143 -- # [[ libcrypto.so.1.1 == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ libdaos.so.2 == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ libdaos_common.so == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ libdfs.so == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ libgurt.so.4 == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ libpthread.so.0 == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ libz.so.1 == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ libisal.so.2 == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ libisal_crypto.so.2 == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ libcart.so.4 == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ liblz4.so.1 == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ libprotobuf-c.so.1 == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ libyaml-0.so.2 == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ libmercury_hl.so.2 == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ libmercury.so.2 == liburing.so.* ]] 00:15:14.403 
08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ libmercury_util.so.2 == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ libna.so.2 == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ libfabric.so.1 == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/common.sh@143 -- # [[ libpsm2.so.2 == liburing.so.* ]] 00:15:14.403 08:01:20 -- dd/common.sh@142 -- # read -r lib _ so _ 00:15:14.403 08:01:20 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:15:14.403 08:01:20 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:15:14.403 08:01:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:14.403 08:01:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:14.403 08:01:20 -- common/autotest_common.sh@10 -- # set +x 00:15:14.403 ************************************ 00:15:14.403 START TEST spdk_dd_basic_rw 00:15:14.403 ************************************ 00:15:14.403 08:01:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:15:14.662 * Looking for test storage... 00:15:14.662 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:15:14.662 08:01:20 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:14.662 08:01:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:14.662 08:01:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:14.662 08:01:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:14.662 08:01:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:15:14.662 08:01:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:15:14.662 08:01:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:15:14.662 08:01:20 -- paths/export.sh@5 -- # export PATH 00:15:14.662 08:01:20 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:15:14.662 08:01:20 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:15:14.662 08:01:20 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:15:14.662 08:01:20 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:15:14.662 08:01:20 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:15:14.662 08:01:20 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:15:14.662 08:01:20 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:15:14.662 08:01:20 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:15:14.662 08:01:20 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:14.662 08:01:20 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:14.662 08:01:20 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:15:14.662 08:01:20 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:15:14.662 08:01:20 -- dd/common.sh@126 -- # mapfile -t id 00:15:14.662 08:01:20 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:15:14.923 08:01:20 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not 
Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): 
Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 80 Data Units Written: 204 Host Read Commands: 1596 Host Write Commands: 308 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:15:14.923 08:01:20 -- dd/common.sh@130 -- # lbaf=04 00:15:14.923 08:01:20 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 
2048 [ ... remainder of the spdk_nvme_identify output repeated verbatim by the second xtrace expansion, identical to the dump above, elided ... ] Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA
Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:15:14.923 08:01:20 -- dd/common.sh@132 -- # lbaf=4096 00:15:14.923 08:01:20 -- dd/common.sh@134 -- # echo 4096 00:15:14.923 08:01:20 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:15:14.923 08:01:20 -- dd/basic_rw.sh@96 -- # : 00:15:14.923 08:01:20 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:15:14.923 08:01:20 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:15:14.923 08:01:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:14.923 08:01:20 -- common/autotest_common.sh@10 -- # set +x 00:15:14.923 08:01:20 -- dd/basic_rw.sh@96 -- # gen_conf 00:15:14.923 08:01:20 -- dd/common.sh@31 -- # xtrace_disable 00:15:14.923 08:01:20 -- common/autotest_common.sh@10 -- # set +x 00:15:14.923 ************************************ 00:15:14.923 START TEST dd_bs_lt_native_bs 00:15:14.923 ************************************ 00:15:14.923 08:01:20 -- common/autotest_common.sh@1104 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:15:14.923 08:01:20 -- common/autotest_common.sh@640 -- # local es=0 00:15:14.923 08:01:20 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:15:14.923 08:01:20 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:14.923 08:01:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:14.923 08:01:20 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:14.923 08:01:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:14.923 08:01:20 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:14.923 08:01:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:14.923 08:01:20 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:14.923 08:01:20 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:14.923 08:01:20 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:15:14.923 { 00:15:14.923 "subsystems": [ 00:15:14.923 { 00:15:14.923 "subsystem": "bdev", 00:15:14.923 "config": [ 00:15:14.923 { 00:15:14.923 "params": { 00:15:14.923 "trtype": "pcie", 00:15:14.923 "name": "Nvme0", 00:15:14.923 "traddr": "0000:00:06.0" 00:15:14.923 }, 00:15:14.923 "method": "bdev_nvme_attach_controller" 00:15:14.923 }, 00:15:14.924 { 00:15:14.924 "method": "bdev_wait_for_examine" 00:15:14.924 } 00:15:14.924 ] 00:15:14.924 } 00:15:14.924 ] 00:15:14.924 } 00:15:14.924 [2024-07-13 08:01:20.710603] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
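The two bracketed regex matches above are how the harness derives the drive's native block size: the first pulls the current LBA format index out of the spdk_nvme_identify dump (#04 here), the second pulls that format's data size (4096). A minimal sketch of the same two-step parse, assuming plain bash regex capture (variable names are illustrative, not the literal dd/common.sh source):

    id=$(spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0')
    re_cur='Current LBA Format: *LBA Format #([0-9]+)'
    [[ $id =~ $re_cur ]] && lbaf=${BASH_REMATCH[1]}           # -> 04
    re_size="LBA Format #${lbaf}: Data Size: *([0-9]+)"
    [[ $id =~ $re_size ]] && native_bs=${BASH_REMATCH[1]}     # -> 4096

The dd_bs_lt_native_bs test that starts next wraps spdk_dd in NOT: asking for --bs=2048 against a 4096-byte native block must fail, and the es= bookkeeping further down folds that expected failure back into a pass.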
00:15:14.924 [2024-07-13 08:01:20.710847] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68042 ] 00:15:15.190 [2024-07-13 08:01:20.864028] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.190 [2024-07-13 08:01:20.914019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.452 [2024-07-13 08:01:21.069725] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:15:15.452 [2024-07-13 08:01:21.069858] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:15.452 [2024-07-13 08:01:21.179293] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:15:15.710 ************************************ 00:15:15.710 END TEST dd_bs_lt_native_bs 00:15:15.710 ************************************ 00:15:15.710 08:01:21 -- common/autotest_common.sh@643 -- # es=234 00:15:15.710 08:01:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:15.710 08:01:21 -- common/autotest_common.sh@652 -- # es=106 00:15:15.710 08:01:21 -- common/autotest_common.sh@653 -- # case "$es" in 00:15:15.710 08:01:21 -- common/autotest_common.sh@660 -- # es=1 00:15:15.710 08:01:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:15.710 00:15:15.710 real 0m0.697s 00:15:15.710 user 0m0.344s 00:15:15.710 sys 0m0.205s 00:15:15.710 08:01:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:15.710 08:01:21 -- common/autotest_common.sh@10 -- # set +x 00:15:15.710 08:01:21 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:15:15.710 08:01:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:15.710 08:01:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:15.710 08:01:21 -- common/autotest_common.sh@10 -- # set +x 00:15:15.710 ************************************ 00:15:15.710 START TEST dd_rw 00:15:15.710 ************************************ 00:15:15.710 08:01:21 -- common/autotest_common.sh@1104 -- # basic_rw 4096 00:15:15.710 08:01:21 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:15:15.710 08:01:21 -- dd/basic_rw.sh@12 -- # local count size 00:15:15.710 08:01:21 -- dd/basic_rw.sh@13 -- # local qds bss 00:15:15.710 08:01:21 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:15:15.710 08:01:21 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:15:15.710 08:01:21 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:15:15.710 08:01:21 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:15:15.710 08:01:21 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:15:15.710 08:01:21 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:15:15.710 08:01:21 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:15:15.710 08:01:21 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:15:15.710 08:01:21 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:15:15.710 08:01:21 -- dd/basic_rw.sh@23 -- # count=15 00:15:15.710 08:01:21 -- dd/basic_rw.sh@24 -- # count=15 00:15:15.710 08:01:21 -- dd/basic_rw.sh@25 -- # size=61440 00:15:15.710 08:01:21 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:15:15.710 08:01:21 -- dd/common.sh@98 -- # xtrace_disable 00:15:15.710 08:01:21 -- common/autotest_common.sh@10 -- # set +x 00:15:16.277 08:01:21 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
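The dd_rw run launched just above sweeps a small matrix: three block sizes obtained by left-shifting the native 4096 (the bss+=($((native_bs << bs))) lines), queue depths 1 and 64, and a per-size count (15, 7, 3) that keeps every pass in the same few-tens-of-kB range. Condensed to a sketch (paths shortened, gen_conf standing in for the JSON plumbing described a little further down):

    native_bs=4096
    qds=(1 64)
    bss=()
    for s in {0..2}; do bss+=( $((native_bs << s)) ); done    # 4096 8192 16384
    declare -A counts=( [4096]=15 [8192]=7 [16384]=3 )        # 61440 / 57344 / 49152 bytes
    for bs in "${bss[@]}"; do
      for qd in "${qds[@]}"; do
        spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(gen_conf)  # write out
        spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" \
                --count="${counts[$bs]}" --json <(gen_conf)                          # read back
        diff -q dd.dump0 dd.dump1    # round-trip must be byte-identical
        clear_nvme                   # zero the bdev before the next cell
      done
    done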
00:15:16.277 08:01:21 -- dd/basic_rw.sh@30 -- # gen_conf 00:15:16.277 08:01:21 -- dd/common.sh@31 -- # xtrace_disable 00:15:16.277 08:01:21 -- common/autotest_common.sh@10 -- # set +x 00:15:16.277 { 00:15:16.277 "subsystems": [ 00:15:16.277 { 00:15:16.277 "subsystem": "bdev", 00:15:16.277 "config": [ 00:15:16.277 { 00:15:16.277 "params": { 00:15:16.277 "trtype": "pcie", 00:15:16.277 "name": "Nvme0", 00:15:16.277 "traddr": "0000:00:06.0" 00:15:16.277 }, 00:15:16.277 "method": "bdev_nvme_attach_controller" 00:15:16.277 }, 00:15:16.277 { 00:15:16.277 "method": "bdev_wait_for_examine" 00:15:16.277 } 00:15:16.277 ] 00:15:16.277 } 00:15:16.277 ] 00:15:16.277 } 00:15:16.277 [2024-07-13 08:01:22.059233] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:16.277 [2024-07-13 08:01:22.059392] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68081 ] 00:15:16.535 [2024-07-13 08:01:22.186034] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.535 [2024-07-13 08:01:22.231432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.794  Copying: 60/60 [kB] (average 29 MBps) 00:15:16.794 00:15:16.794 08:01:22 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:15:16.794 08:01:22 -- dd/basic_rw.sh@37 -- # gen_conf 00:15:16.794 08:01:22 -- dd/common.sh@31 -- # xtrace_disable 00:15:16.794 08:01:22 -- common/autotest_common.sh@10 -- # set +x 00:15:17.051 { 00:15:17.051 "subsystems": [ 00:15:17.051 { 00:15:17.051 "subsystem": "bdev", 00:15:17.051 "config": [ 00:15:17.051 { 00:15:17.051 "params": { 00:15:17.051 "trtype": "pcie", 00:15:17.051 "name": "Nvme0", 00:15:17.051 "traddr": "0000:00:06.0" 00:15:17.051 }, 00:15:17.051 "method": "bdev_nvme_attach_controller" 00:15:17.051 }, 00:15:17.051 { 00:15:17.051 "method": "bdev_wait_for_examine" 00:15:17.051 } 00:15:17.051 ] 00:15:17.051 } 00:15:17.051 ] 00:15:17.051 } 00:15:17.051 [2024-07-13 08:01:22.714034] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
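Every spdk_dd invocation in this test receives its bdev configuration as JSON on an inherited file descriptor (--json /dev/fd/62) rather than a config file on disk; the gen_conf trace lines emit the {"subsystems": ...} document that attaches controller Nvme0 at PCI 0000:00:06.0 and waits for bdev examination before any I/O starts. A sketch of the pattern using process substitution (the exact fd wiring inside run_test and gen_conf may differ):

    # gen_conf prints the JSON shown in the trace; <(...) hands it over as an anonymous fd
    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json <(gen_conf)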
00:15:17.051 [2024-07-13 08:01:22.714304] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68103 ] 00:15:17.051 [2024-07-13 08:01:22.843270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.309 [2024-07-13 08:01:22.891645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.574  Copying: 60/60 [kB] (average 19 MBps) 00:15:17.574 00:15:17.574 08:01:23 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:17.574 08:01:23 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:15:17.574 08:01:23 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:15:17.574 08:01:23 -- dd/common.sh@11 -- # local nvme_ref= 00:15:17.574 08:01:23 -- dd/common.sh@12 -- # local size=61440 00:15:17.574 08:01:23 -- dd/common.sh@14 -- # local bs=1048576 00:15:17.574 08:01:23 -- dd/common.sh@15 -- # local count=1 00:15:17.574 08:01:23 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:15:17.574 08:01:23 -- dd/common.sh@18 -- # gen_conf 00:15:17.574 08:01:23 -- dd/common.sh@31 -- # xtrace_disable 00:15:17.574 08:01:23 -- common/autotest_common.sh@10 -- # set +x 00:15:17.574 { 00:15:17.574 "subsystems": [ 00:15:17.574 { 00:15:17.574 "subsystem": "bdev", 00:15:17.574 "config": [ 00:15:17.574 { 00:15:17.574 "params": { 00:15:17.574 "trtype": "pcie", 00:15:17.574 "name": "Nvme0", 00:15:17.574 "traddr": "0000:00:06.0" 00:15:17.574 }, 00:15:17.574 "method": "bdev_nvme_attach_controller" 00:15:17.574 }, 00:15:17.574 { 00:15:17.574 "method": "bdev_wait_for_examine" 00:15:17.574 } 00:15:17.574 ] 00:15:17.574 } 00:15:17.574 ] 00:15:17.574 } 00:15:17.574 [2024-07-13 08:01:23.381371] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
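The clear_nvme step that just started (and repeats after every read-back) is what produces the "Copying: 1024/1024 [kB]" lines below: it streams a single 1 MiB block of zeroes onto the bdev so stale data from a previous cell can never satisfy the next diff. The command is visible in the trace itself:

    # bs=1048576, count=1: zero the first MiB of Nvme0n1 between test cells
    spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(gen_conf)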
00:15:17.574 [2024-07-13 08:01:23.381560] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68112 ] 00:15:17.836 [2024-07-13 08:01:23.509852] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.836 [2024-07-13 08:01:23.555937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.094  Copying: 1024/1024 [kB] (average 500 MBps) 00:15:18.094 00:15:18.094 08:01:23 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:15:18.094 08:01:23 -- dd/basic_rw.sh@23 -- # count=15 00:15:18.094 08:01:23 -- dd/basic_rw.sh@24 -- # count=15 00:15:18.094 08:01:23 -- dd/basic_rw.sh@25 -- # size=61440 00:15:18.094 08:01:23 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:15:18.094 08:01:23 -- dd/common.sh@98 -- # xtrace_disable 00:15:18.094 08:01:23 -- common/autotest_common.sh@10 -- # set +x 00:15:19.030 08:01:24 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:15:19.030 08:01:24 -- dd/basic_rw.sh@30 -- # gen_conf 00:15:19.030 08:01:24 -- dd/common.sh@31 -- # xtrace_disable 00:15:19.030 08:01:24 -- common/autotest_common.sh@10 -- # set +x 00:15:19.030 { 00:15:19.030 "subsystems": [ 00:15:19.030 { 00:15:19.030 "subsystem": "bdev", 00:15:19.030 "config": [ 00:15:19.030 { 00:15:19.030 "params": { 00:15:19.030 "trtype": "pcie", 00:15:19.030 "name": "Nvme0", 00:15:19.030 "traddr": "0000:00:06.0" 00:15:19.030 }, 00:15:19.030 "method": "bdev_nvme_attach_controller" 00:15:19.030 }, 00:15:19.030 { 00:15:19.030 "method": "bdev_wait_for_examine" 00:15:19.030 } 00:15:19.030 ] 00:15:19.030 } 00:15:19.030 ] 00:15:19.030 } 00:15:19.030 [2024-07-13 08:01:24.658291] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:19.030 [2024-07-13 08:01:24.658484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68143 ] 00:15:19.030 [2024-07-13 08:01:24.793159] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.030 [2024-07-13 08:01:24.837067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.545  Copying: 60/60 [kB] (average 58 MBps) 00:15:19.545 00:15:19.545 08:01:25 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:15:19.545 08:01:25 -- dd/basic_rw.sh@37 -- # gen_conf 00:15:19.545 08:01:25 -- dd/common.sh@31 -- # xtrace_disable 00:15:19.545 08:01:25 -- common/autotest_common.sh@10 -- # set +x 00:15:19.545 { 00:15:19.545 "subsystems": [ 00:15:19.545 { 00:15:19.545 "subsystem": "bdev", 00:15:19.545 "config": [ 00:15:19.545 { 00:15:19.545 "params": { 00:15:19.545 "trtype": "pcie", 00:15:19.545 "name": "Nvme0", 00:15:19.545 "traddr": "0000:00:06.0" 00:15:19.545 }, 00:15:19.545 "method": "bdev_nvme_attach_controller" 00:15:19.545 }, 00:15:19.545 { 00:15:19.545 "method": "bdev_wait_for_examine" 00:15:19.545 } 00:15:19.545 ] 00:15:19.545 } 00:15:19.545 ] 00:15:19.545 } 00:15:19.545 [2024-07-13 08:01:25.326086] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:15:19.545 [2024-07-13 08:01:25.326253] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68156 ] 00:15:19.803 [2024-07-13 08:01:25.454011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.803 [2024-07-13 08:01:25.509330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.319  Copying: 60/60 [kB] (average 58 MBps) 00:15:20.319 00:15:20.319 08:01:25 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:20.319 08:01:25 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:15:20.319 08:01:25 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:15:20.319 08:01:25 -- dd/common.sh@11 -- # local nvme_ref= 00:15:20.319 08:01:25 -- dd/common.sh@12 -- # local size=61440 00:15:20.319 08:01:25 -- dd/common.sh@14 -- # local bs=1048576 00:15:20.319 08:01:25 -- dd/common.sh@15 -- # local count=1 00:15:20.319 08:01:25 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:15:20.319 08:01:25 -- dd/common.sh@18 -- # gen_conf 00:15:20.319 08:01:25 -- dd/common.sh@31 -- # xtrace_disable 00:15:20.319 08:01:25 -- common/autotest_common.sh@10 -- # set +x 00:15:20.319 { 00:15:20.319 "subsystems": [ 00:15:20.319 { 00:15:20.319 "subsystem": "bdev", 00:15:20.319 "config": [ 00:15:20.319 { 00:15:20.319 "params": { 00:15:20.319 "trtype": "pcie", 00:15:20.319 "name": "Nvme0", 00:15:20.319 "traddr": "0000:00:06.0" 00:15:20.319 }, 00:15:20.319 "method": "bdev_nvme_attach_controller" 00:15:20.319 }, 00:15:20.319 { 00:15:20.319 "method": "bdev_wait_for_examine" 00:15:20.319 } 00:15:20.319 ] 00:15:20.319 } 00:15:20.319 ] 00:15:20.319 } 00:15:20.319 [2024-07-13 08:01:26.026084] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:15:20.319 [2024-07-13 08:01:26.026256] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68176 ] 00:15:20.585 [2024-07-13 08:01:26.161026] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.585 [2024-07-13 08:01:26.206301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.849  Copying: 1024/1024 [kB] (average 1000 MBps) 00:15:20.849 00:15:20.849 08:01:26 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:15:20.849 08:01:26 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:15:20.849 08:01:26 -- dd/basic_rw.sh@23 -- # count=7 00:15:20.849 08:01:26 -- dd/basic_rw.sh@24 -- # count=7 00:15:20.849 08:01:26 -- dd/basic_rw.sh@25 -- # size=57344 00:15:20.849 08:01:26 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:15:20.849 08:01:26 -- dd/common.sh@98 -- # xtrace_disable 00:15:20.849 08:01:26 -- common/autotest_common.sh@10 -- # set +x 00:15:21.414 08:01:27 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:15:21.414 08:01:27 -- dd/basic_rw.sh@30 -- # gen_conf 00:15:21.414 08:01:27 -- dd/common.sh@31 -- # xtrace_disable 00:15:21.414 08:01:27 -- common/autotest_common.sh@10 -- # set +x 00:15:21.414 { 00:15:21.414 "subsystems": [ 00:15:21.414 { 00:15:21.414 "subsystem": "bdev", 00:15:21.414 "config": [ 00:15:21.414 { 00:15:21.414 "params": { 00:15:21.414 "trtype": "pcie", 00:15:21.414 "name": "Nvme0", 00:15:21.414 "traddr": "0000:00:06.0" 00:15:21.414 }, 00:15:21.414 "method": "bdev_nvme_attach_controller" 00:15:21.414 }, 00:15:21.414 { 00:15:21.414 "method": "bdev_wait_for_examine" 00:15:21.414 } 00:15:21.414 ] 00:15:21.414 } 00:15:21.414 ] 00:15:21.414 } 00:15:21.672 [2024-07-13 08:01:27.269771] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
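The sweep has now moved to 8 KiB blocks (and will later move to 16 KiB), so the progress lines shrink accordingly: the count drops as the block size grows, keeping each transfer in the same ballpark:

    # bs=4096:  15 * 4096  = 61440 bytes -> "Copying: 60/60 [kB]"
    # bs=8192:   7 * 8192  = 57344 bytes -> "Copying: 56/56 [kB]"
    # bs=16384:  3 * 16384 = 49152 bytes -> "Copying: 48/48 [kB]"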
00:15:21.672 [2024-07-13 08:01:27.269944] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68197 ] 00:15:21.672 [2024-07-13 08:01:27.400940] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.672 [2024-07-13 08:01:27.446910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.188  Copying: 56/56 [kB] (average 54 MBps) 00:15:22.188 00:15:22.188 08:01:27 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:15:22.188 08:01:27 -- dd/basic_rw.sh@37 -- # gen_conf 00:15:22.188 08:01:27 -- dd/common.sh@31 -- # xtrace_disable 00:15:22.188 08:01:27 -- common/autotest_common.sh@10 -- # set +x 00:15:22.188 { 00:15:22.188 "subsystems": [ 00:15:22.188 { 00:15:22.188 "subsystem": "bdev", 00:15:22.188 "config": [ 00:15:22.188 { 00:15:22.188 "params": { 00:15:22.188 "trtype": "pcie", 00:15:22.189 "name": "Nvme0", 00:15:22.189 "traddr": "0000:00:06.0" 00:15:22.189 }, 00:15:22.189 "method": "bdev_nvme_attach_controller" 00:15:22.189 }, 00:15:22.189 { 00:15:22.189 "method": "bdev_wait_for_examine" 00:15:22.189 } 00:15:22.189 ] 00:15:22.189 } 00:15:22.189 ] 00:15:22.189 } 00:15:22.189 [2024-07-13 08:01:27.934516] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:22.189 [2024-07-13 08:01:27.934673] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68212 ] 00:15:22.447 [2024-07-13 08:01:28.072120] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.447 [2024-07-13 08:01:28.123572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.705  Copying: 56/56 [kB] (average 27 MBps) 00:15:22.705 00:15:22.705 08:01:28 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:22.705 08:01:28 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:15:22.705 08:01:28 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:15:22.705 08:01:28 -- dd/common.sh@11 -- # local nvme_ref= 00:15:22.705 08:01:28 -- dd/common.sh@12 -- # local size=57344 00:15:22.705 08:01:28 -- dd/common.sh@14 -- # local bs=1048576 00:15:22.705 08:01:28 -- dd/common.sh@15 -- # local count=1 00:15:22.705 08:01:28 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:15:22.705 08:01:28 -- dd/common.sh@18 -- # gen_conf 00:15:22.705 08:01:28 -- dd/common.sh@31 -- # xtrace_disable 00:15:22.705 08:01:28 -- common/autotest_common.sh@10 -- # set +x 00:15:22.963 { 00:15:22.963 "subsystems": [ 00:15:22.963 { 00:15:22.963 "subsystem": "bdev", 00:15:22.963 "config": [ 00:15:22.963 { 00:15:22.963 "params": { 00:15:22.963 "trtype": "pcie", 00:15:22.963 "name": "Nvme0", 00:15:22.963 "traddr": "0000:00:06.0" 00:15:22.963 }, 00:15:22.963 "method": "bdev_nvme_attach_controller" 00:15:22.963 }, 00:15:22.963 { 00:15:22.963 "method": "bdev_wait_for_examine" 00:15:22.963 } 00:15:22.963 ] 00:15:22.963 } 00:15:22.963 ] 00:15:22.963 } 00:15:22.963 [2024-07-13 08:01:28.606206] Starting SPDK v24.01.1-pre git sha1 
4b94202c6 / DPDK 22.11.4 initialization... 00:15:22.963 [2024-07-13 08:01:28.606383] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68226 ] 00:15:22.963 [2024-07-13 08:01:28.749853] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.220 [2024-07-13 08:01:28.805269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.479  Copying: 1024/1024 [kB] (average 1000 MBps) 00:15:23.479 00:15:23.479 08:01:29 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:15:23.479 08:01:29 -- dd/basic_rw.sh@23 -- # count=7 00:15:23.479 08:01:29 -- dd/basic_rw.sh@24 -- # count=7 00:15:23.479 08:01:29 -- dd/basic_rw.sh@25 -- # size=57344 00:15:23.479 08:01:29 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:15:23.479 08:01:29 -- dd/common.sh@98 -- # xtrace_disable 00:15:23.479 08:01:29 -- common/autotest_common.sh@10 -- # set +x 00:15:24.046 08:01:29 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:15:24.046 08:01:29 -- dd/basic_rw.sh@30 -- # gen_conf 00:15:24.046 08:01:29 -- dd/common.sh@31 -- # xtrace_disable 00:15:24.046 08:01:29 -- common/autotest_common.sh@10 -- # set +x 00:15:24.046 { 00:15:24.046 "subsystems": [ 00:15:24.046 { 00:15:24.046 "subsystem": "bdev", 00:15:24.046 "config": [ 00:15:24.046 { 00:15:24.046 "params": { 00:15:24.046 "trtype": "pcie", 00:15:24.046 "name": "Nvme0", 00:15:24.046 "traddr": "0000:00:06.0" 00:15:24.046 }, 00:15:24.046 "method": "bdev_nvme_attach_controller" 00:15:24.046 }, 00:15:24.046 { 00:15:24.046 "method": "bdev_wait_for_examine" 00:15:24.046 } 00:15:24.046 ] 00:15:24.046 } 00:15:24.046 ] 00:15:24.046 } 00:15:24.303 [2024-07-13 08:01:29.861966] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
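The cell launched just above repeats the same 8 KiB copy with --qd=64, the queue-depth knob: how many requests spdk_dd keeps in flight against the bdev at once. qd=1 serializes the I/O while qd=64 overlaps it; with payloads this small on an emulated controller, the reported averages stay in the same range either way. The only difference between the two cells:

    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1  --json <(gen_conf)   # one request in flight
    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json <(gen_conf)   # up to 64 in flight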
00:15:24.303 [2024-07-13 08:01:29.862138] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68253 ] 00:15:24.303 [2024-07-13 08:01:29.992077] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.303 [2024-07-13 08:01:30.038396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.819  Copying: 56/56 [kB] (average 54 MBps) 00:15:24.819 00:15:24.819 08:01:30 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:15:24.819 08:01:30 -- dd/basic_rw.sh@37 -- # gen_conf 00:15:24.819 08:01:30 -- dd/common.sh@31 -- # xtrace_disable 00:15:24.819 08:01:30 -- common/autotest_common.sh@10 -- # set +x 00:15:24.819 { 00:15:24.819 "subsystems": [ 00:15:24.819 { 00:15:24.819 "subsystem": "bdev", 00:15:24.819 "config": [ 00:15:24.819 { 00:15:24.819 "params": { 00:15:24.819 "trtype": "pcie", 00:15:24.819 "name": "Nvme0", 00:15:24.819 "traddr": "0000:00:06.0" 00:15:24.819 }, 00:15:24.819 "method": "bdev_nvme_attach_controller" 00:15:24.819 }, 00:15:24.819 { 00:15:24.819 "method": "bdev_wait_for_examine" 00:15:24.819 } 00:15:24.819 ] 00:15:24.819 } 00:15:24.819 ] 00:15:24.819 } 00:15:24.819 [2024-07-13 08:01:30.518510] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:24.819 [2024-07-13 08:01:30.518673] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68261 ] 00:15:25.078 [2024-07-13 08:01:30.657815] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.078 [2024-07-13 08:01:30.704366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.337  Copying: 56/56 [kB] (average 54 MBps) 00:15:25.337 00:15:25.337 08:01:31 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:25.337 08:01:31 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:15:25.337 08:01:31 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:15:25.337 08:01:31 -- dd/common.sh@11 -- # local nvme_ref= 00:15:25.337 08:01:31 -- dd/common.sh@12 -- # local size=57344 00:15:25.337 08:01:31 -- dd/common.sh@14 -- # local bs=1048576 00:15:25.337 08:01:31 -- dd/common.sh@15 -- # local count=1 00:15:25.337 08:01:31 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:15:25.337 08:01:31 -- dd/common.sh@18 -- # gen_conf 00:15:25.337 08:01:31 -- dd/common.sh@31 -- # xtrace_disable 00:15:25.337 08:01:31 -- common/autotest_common.sh@10 -- # set +x 00:15:25.337 { 00:15:25.337 "subsystems": [ 00:15:25.337 { 00:15:25.337 "subsystem": "bdev", 00:15:25.337 "config": [ 00:15:25.337 { 00:15:25.337 "params": { 00:15:25.337 "trtype": "pcie", 00:15:25.337 "name": "Nvme0", 00:15:25.337 "traddr": "0000:00:06.0" 00:15:25.337 }, 00:15:25.337 "method": "bdev_nvme_attach_controller" 00:15:25.337 }, 00:15:25.337 { 00:15:25.337 "method": "bdev_wait_for_examine" 00:15:25.337 } 00:15:25.337 ] 00:15:25.337 } 00:15:25.337 ] 00:15:25.337 } 00:15:25.599 [2024-07-13 08:01:31.197890] Starting SPDK v24.01.1-pre git sha1 
4b94202c6 / DPDK 22.11.4 initialization... 00:15:25.599 [2024-07-13 08:01:31.198173] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68282 ] 00:15:25.599 [2024-07-13 08:01:31.327880] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.599 [2024-07-13 08:01:31.374279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.118  Copying: 1024/1024 [kB] (average 500 MBps) 00:15:26.118 00:15:26.118 08:01:31 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:15:26.118 08:01:31 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:15:26.118 08:01:31 -- dd/basic_rw.sh@23 -- # count=3 00:15:26.118 08:01:31 -- dd/basic_rw.sh@24 -- # count=3 00:15:26.118 08:01:31 -- dd/basic_rw.sh@25 -- # size=49152 00:15:26.118 08:01:31 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:15:26.118 08:01:31 -- dd/common.sh@98 -- # xtrace_disable 00:15:26.118 08:01:31 -- common/autotest_common.sh@10 -- # set +x 00:15:26.377 08:01:32 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:15:26.636 08:01:32 -- dd/basic_rw.sh@30 -- # gen_conf 00:15:26.636 08:01:32 -- dd/common.sh@31 -- # xtrace_disable 00:15:26.636 08:01:32 -- common/autotest_common.sh@10 -- # set +x 00:15:26.636 { 00:15:26.636 "subsystems": [ 00:15:26.636 { 00:15:26.636 "subsystem": "bdev", 00:15:26.636 "config": [ 00:15:26.636 { 00:15:26.636 "params": { 00:15:26.636 "trtype": "pcie", 00:15:26.636 "name": "Nvme0", 00:15:26.636 "traddr": "0000:00:06.0" 00:15:26.636 }, 00:15:26.636 "method": "bdev_nvme_attach_controller" 00:15:26.636 }, 00:15:26.636 { 00:15:26.636 "method": "bdev_wait_for_examine" 00:15:26.636 } 00:15:26.636 ] 00:15:26.636 } 00:15:26.636 ] 00:15:26.636 } 00:15:26.636 [2024-07-13 08:01:32.322216] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
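Before each write pass the harness regenerates the source file: the gen_bytes calls in the trace (61440, 57344 and 49152 bytes) refill dd.dump0 with that many fresh bytes, so a leftover file from an earlier block size cannot mask a short write. A plausible one-line equivalent, assuming /dev/urandom as the source (the real dd/common.sh helper may generate the bytes differently):

    gen_bytes() { head -c "$1" /dev/urandom > dd.dump0; }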
00:15:26.636 [2024-07-13 08:01:32.322387] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68302 ] 00:15:26.895 [2024-07-13 08:01:32.452915] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.895 [2024-07-13 08:01:32.500937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.152  Copying: 48/48 [kB] (average 46 MBps) 00:15:27.153 00:15:27.153 08:01:32 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:15:27.153 08:01:32 -- dd/basic_rw.sh@37 -- # gen_conf 00:15:27.153 08:01:32 -- dd/common.sh@31 -- # xtrace_disable 00:15:27.153 08:01:32 -- common/autotest_common.sh@10 -- # set +x 00:15:27.153 { 00:15:27.153 "subsystems": [ 00:15:27.153 { 00:15:27.153 "subsystem": "bdev", 00:15:27.153 "config": [ 00:15:27.153 { 00:15:27.153 "params": { 00:15:27.153 "trtype": "pcie", 00:15:27.153 "name": "Nvme0", 00:15:27.153 "traddr": "0000:00:06.0" 00:15:27.153 }, 00:15:27.153 "method": "bdev_nvme_attach_controller" 00:15:27.153 }, 00:15:27.153 { 00:15:27.153 "method": "bdev_wait_for_examine" 00:15:27.153 } 00:15:27.153 ] 00:15:27.153 } 00:15:27.153 ] 00:15:27.153 } 00:15:27.410 [2024-07-13 08:01:32.989419] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:27.411 [2024-07-13 08:01:32.989596] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68321 ] 00:15:27.411 [2024-07-13 08:01:33.120245] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.411 [2024-07-13 08:01:33.185131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.927  Copying: 48/48 [kB] (average 46 MBps) 00:15:27.927 00:15:27.927 08:01:33 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:27.927 08:01:33 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:15:27.927 08:01:33 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:15:27.927 08:01:33 -- dd/common.sh@11 -- # local nvme_ref= 00:15:27.927 08:01:33 -- dd/common.sh@12 -- # local size=49152 00:15:27.927 08:01:33 -- dd/common.sh@14 -- # local bs=1048576 00:15:27.927 08:01:33 -- dd/common.sh@15 -- # local count=1 00:15:27.927 08:01:33 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:15:27.927 08:01:33 -- dd/common.sh@18 -- # gen_conf 00:15:27.927 08:01:33 -- dd/common.sh@31 -- # xtrace_disable 00:15:27.927 08:01:33 -- common/autotest_common.sh@10 -- # set +x 00:15:27.927 { 00:15:27.927 "subsystems": [ 00:15:27.927 { 00:15:27.927 "subsystem": "bdev", 00:15:27.927 "config": [ 00:15:27.927 { 00:15:27.927 "params": { 00:15:27.927 "trtype": "pcie", 00:15:27.927 "name": "Nvme0", 00:15:27.927 "traddr": "0000:00:06.0" 00:15:27.927 }, 00:15:27.927 "method": "bdev_nvme_attach_controller" 00:15:27.927 }, 00:15:27.927 { 00:15:27.927 "method": "bdev_wait_for_examine" 00:15:27.927 } 00:15:27.927 ] 00:15:27.927 } 00:15:27.927 ] 00:15:27.927 } 00:15:27.927 [2024-07-13 08:01:33.678753] Starting SPDK v24.01.1-pre git sha1 
4b94202c6 / DPDK 22.11.4 initialization... 00:15:27.927 [2024-07-13 08:01:33.678923] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68331 ] 00:15:28.187 [2024-07-13 08:01:33.808603] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.187 [2024-07-13 08:01:33.854207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.447  Copying: 1024/1024 [kB] (average 1000 MBps) 00:15:28.447 00:15:28.447 08:01:34 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:15:28.447 08:01:34 -- dd/basic_rw.sh@23 -- # count=3 00:15:28.447 08:01:34 -- dd/basic_rw.sh@24 -- # count=3 00:15:28.447 08:01:34 -- dd/basic_rw.sh@25 -- # size=49152 00:15:28.447 08:01:34 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:15:28.447 08:01:34 -- dd/common.sh@98 -- # xtrace_disable 00:15:28.447 08:01:34 -- common/autotest_common.sh@10 -- # set +x 00:15:29.014 08:01:34 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:15:29.014 08:01:34 -- dd/basic_rw.sh@30 -- # gen_conf 00:15:29.014 08:01:34 -- dd/common.sh@31 -- # xtrace_disable 00:15:29.014 08:01:34 -- common/autotest_common.sh@10 -- # set +x 00:15:29.014 { 00:15:29.014 "subsystems": [ 00:15:29.014 { 00:15:29.014 "subsystem": "bdev", 00:15:29.014 "config": [ 00:15:29.014 { 00:15:29.014 "params": { 00:15:29.014 "trtype": "pcie", 00:15:29.014 "name": "Nvme0", 00:15:29.014 "traddr": "0000:00:06.0" 00:15:29.014 }, 00:15:29.014 "method": "bdev_nvme_attach_controller" 00:15:29.014 }, 00:15:29.014 { 00:15:29.014 "method": "bdev_wait_for_examine" 00:15:29.014 } 00:15:29.014 ] 00:15:29.014 } 00:15:29.014 ] 00:15:29.014 } 00:15:29.014 [2024-07-13 08:01:34.818695] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:15:29.014 [2024-07-13 08:01:34.818872] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68363 ] 00:15:29.273 [2024-07-13 08:01:34.948521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.273 [2024-07-13 08:01:34.993881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.532  Copying: 48/48 [kB] (average 46 MBps) 00:15:29.532 00:15:29.532 08:01:35 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:15:29.790 08:01:35 -- dd/basic_rw.sh@37 -- # gen_conf 00:15:29.790 08:01:35 -- dd/common.sh@31 -- # xtrace_disable 00:15:29.790 08:01:35 -- common/autotest_common.sh@10 -- # set +x 00:15:29.790 { 00:15:29.790 "subsystems": [ 00:15:29.790 { 00:15:29.790 "subsystem": "bdev", 00:15:29.790 "config": [ 00:15:29.790 { 00:15:29.790 "params": { 00:15:29.790 "trtype": "pcie", 00:15:29.790 "name": "Nvme0", 00:15:29.790 "traddr": "0000:00:06.0" 00:15:29.790 }, 00:15:29.790 "method": "bdev_nvme_attach_controller" 00:15:29.790 }, 00:15:29.790 { 00:15:29.790 "method": "bdev_wait_for_examine" 00:15:29.790 } 00:15:29.790 ] 00:15:29.790 } 00:15:29.790 ] 00:15:29.790 } 00:15:29.790 [2024-07-13 08:01:35.476316] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:29.790 [2024-07-13 08:01:35.476521] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68376 ] 00:15:30.048 [2024-07-13 08:01:35.606824] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.048 [2024-07-13 08:01:35.655992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.307  Copying: 48/48 [kB] (average 46 MBps) 00:15:30.307 00:15:30.307 08:01:36 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:30.307 08:01:36 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:15:30.307 08:01:36 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:15:30.307 08:01:36 -- dd/common.sh@11 -- # local nvme_ref= 00:15:30.307 08:01:36 -- dd/common.sh@12 -- # local size=49152 00:15:30.307 08:01:36 -- dd/common.sh@14 -- # local bs=1048576 00:15:30.307 08:01:36 -- dd/common.sh@15 -- # local count=1 00:15:30.307 08:01:36 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:15:30.307 08:01:36 -- dd/common.sh@18 -- # gen_conf 00:15:30.307 08:01:36 -- dd/common.sh@31 -- # xtrace_disable 00:15:30.307 08:01:36 -- common/autotest_common.sh@10 -- # set +x 00:15:30.307 { 00:15:30.307 "subsystems": [ 00:15:30.307 { 00:15:30.307 "subsystem": "bdev", 00:15:30.307 "config": [ 00:15:30.307 { 00:15:30.307 "params": { 00:15:30.307 "trtype": "pcie", 00:15:30.307 "name": "Nvme0", 00:15:30.307 "traddr": "0000:00:06.0" 00:15:30.307 }, 00:15:30.307 "method": "bdev_nvme_attach_controller" 00:15:30.307 }, 00:15:30.307 { 00:15:30.307 "method": "bdev_wait_for_examine" 00:15:30.307 } 00:15:30.307 ] 00:15:30.307 } 00:15:30.307 ] 00:15:30.307 } 00:15:30.566 [2024-07-13 08:01:36.142640] Starting SPDK v24.01.1-pre git sha1 
4b94202c6 / DPDK 22.11.4 initialization... 00:15:30.566 [2024-07-13 08:01:36.142827] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68392 ] 00:15:30.566 [2024-07-13 08:01:36.273299] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.566 [2024-07-13 08:01:36.319961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.083  Copying: 1024/1024 [kB] (average 1000 MBps) 00:15:31.083 00:15:31.083 00:15:31.083 real 0m15.358s 00:15:31.083 user 0m9.218s 00:15:31.083 sys 0m3.637s 00:15:31.083 ************************************ 00:15:31.083 END TEST dd_rw 00:15:31.083 ************************************ 00:15:31.083 08:01:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:31.083 08:01:36 -- common/autotest_common.sh@10 -- # set +x 00:15:31.083 08:01:36 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:15:31.083 08:01:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:31.083 08:01:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:31.083 08:01:36 -- common/autotest_common.sh@10 -- # set +x 00:15:31.083 ************************************ 00:15:31.083 START TEST dd_rw_offset 00:15:31.083 ************************************ 00:15:31.083 08:01:36 -- common/autotest_common.sh@1104 -- # basic_offset 00:15:31.083 08:01:36 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:15:31.083 08:01:36 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:15:31.083 08:01:36 -- dd/common.sh@98 -- # xtrace_disable 00:15:31.083 08:01:36 -- common/autotest_common.sh@10 -- # set +x 00:15:31.083 08:01:36 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:15:31.084 08:01:36 -- dd/basic_rw.sh@56 -- # 
data=<4096 bytes of random alphanumeric test data from gen_bytes, elided here for readability> 00:15:31.084 08:01:36 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:15:31.084 08:01:36 -- dd/basic_rw.sh@59 -- # gen_conf 00:15:31.084 08:01:36 -- dd/common.sh@31 -- # xtrace_disable 00:15:31.084 08:01:36 -- common/autotest_common.sh@10 -- # set +x 00:15:31.084 { 00:15:31.084 "subsystems": [ 00:15:31.084 { 00:15:31.084 "subsystem": "bdev", 00:15:31.084 "config": [ 00:15:31.084 { 00:15:31.084 "params": { 00:15:31.084 "trtype": "pcie", 00:15:31.084 "name": "Nvme0", 00:15:31.084 "traddr": "0000:00:06.0" 00:15:31.084 }, 00:15:31.084 "method": "bdev_nvme_attach_controller" 00:15:31.084 }, 00:15:31.084 { 00:15:31.084 "method": "bdev_wait_for_examine" 00:15:31.084 } 00:15:31.084 ] 00:15:31.084 } 00:15:31.084 ] 00:15:31.084 } 00:15:31.342 [2024-07-13 08:01:36.916846] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:31.342 [2024-07-13 08:01:36.917057] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68432 ] 00:15:31.342 [2024-07-13 08:01:37.052166] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.342 [2024-07-13 08:01:37.099540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.860  Copying: 4096/4096 [B] (average 4000 kBps) 00:15:31.860 00:15:31.860 08:01:37 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:15:31.860 08:01:37 -- dd/basic_rw.sh@65 -- # gen_conf 00:15:31.860 08:01:37 -- dd/common.sh@31 -- # xtrace_disable 00:15:31.860 08:01:37 -- common/autotest_common.sh@10 -- # set +x 00:15:31.860 { 00:15:31.860 "subsystems": [ 00:15:31.860 { 00:15:31.860 "subsystem": "bdev", 00:15:31.860 "config": [ 00:15:31.860 { 00:15:31.860 "params": { 00:15:31.860 "trtype": "pcie", 00:15:31.860 "name": "Nvme0", 00:15:31.860 "traddr": "0000:00:06.0" 00:15:31.860 }, 00:15:31.860 "method": "bdev_nvme_attach_controller" 00:15:31.860 }, 00:15:31.860 { 00:15:31.860 "method": "bdev_wait_for_examine" 00:15:31.860 } 00:15:31.860 ] 00:15:31.860 } 00:15:31.860 ] 00:15:31.860 } 00:15:31.860 [2024-07-13 08:01:37.580958] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:15:31.860 [2024-07-13 08:01:37.581129] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68446 ] 00:15:32.125 [2024-07-13 08:01:37.710879] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.125 [2024-07-13 08:01:37.761262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.385  Copying: 4096/4096 [B] (average 4000 kBps) 00:15:32.385 00:15:32.385 08:01:38 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:15:32.386 08:01:38 -- dd/basic_rw.sh@72 -- # [[ <4096 bytes read back from Nvme0n1, identical to the generated pattern above, elided> == <the same 4096-byte pattern in bash glob-escaped form, elided> ]] 00:15:32.386 ************************************ 00:15:32.386 END TEST dd_rw_offset 00:15:32.386 ************************************ 00:15:32.386 00:15:32.386 real 0m1.396s 00:15:32.386 user 0m0.752s 00:15:32.386 sys 0m0.374s 00:15:32.386 08:01:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:32.386 08:01:38 -- common/autotest_common.sh@10 -- # set +x 00:15:32.386 08:01:38 -- dd/basic_rw.sh@1 -- # cleanup 00:15:32.386 08:01:38 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:15:32.386 08:01:38 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:15:32.386 08:01:38 -- dd/common.sh@11 -- # local nvme_ref= 00:15:32.386 08:01:38 -- dd/common.sh@12 -- # local size=0xffff 00:15:32.386 08:01:38 -- dd/common.sh@14 -- # local bs=1048576 00:15:32.386 08:01:38 -- dd/common.sh@15 -- # local count=1 00:15:32.386 08:01:38 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:15:32.386 08:01:38 -- dd/common.sh@18 -- # gen_conf 00:15:32.386 08:01:38 -- dd/common.sh@31 -- # xtrace_disable 00:15:32.386 08:01:38 -- common/autotest_common.sh@10 -- # set +x 00:15:32.645 { 00:15:32.645 "subsystems": [ 00:15:32.645 {
"subsystem": "bdev", 00:15:32.645 "config": [ 00:15:32.645 { 00:15:32.645 "params": { 00:15:32.645 "trtype": "pcie", 00:15:32.645 "name": "Nvme0", 00:15:32.645 "traddr": "0000:00:06.0" 00:15:32.645 }, 00:15:32.645 "method": "bdev_nvme_attach_controller" 00:15:32.645 }, 00:15:32.645 { 00:15:32.645 "method": "bdev_wait_for_examine" 00:15:32.645 } 00:15:32.645 ] 00:15:32.645 } 00:15:32.645 ] 00:15:32.645 } 00:15:32.645 [2024-07-13 08:01:38.297282] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:32.645 [2024-07-13 08:01:38.297489] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68479 ] 00:15:32.645 [2024-07-13 08:01:38.427313] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.903 [2024-07-13 08:01:38.476394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.163  Copying: 1024/1024 [kB] (average 1000 MBps) 00:15:33.163 00:15:33.163 08:01:38 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:33.163 ************************************ 00:15:33.163 END TEST spdk_dd_basic_rw 00:15:33.163 ************************************ 00:15:33.163 00:15:33.163 real 0m18.670s 00:15:33.163 user 0m10.814s 00:15:33.163 sys 0m4.627s 00:15:33.163 08:01:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:33.163 08:01:38 -- common/autotest_common.sh@10 -- # set +x 00:15:33.163 08:01:38 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:15:33.163 08:01:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:33.163 08:01:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:33.163 08:01:38 -- common/autotest_common.sh@10 -- # set +x 00:15:33.163 ************************************ 00:15:33.163 START TEST spdk_dd_posix 00:15:33.163 ************************************ 00:15:33.163 08:01:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:15:33.163 * Looking for test storage... 
00:15:33.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:15:33.163 08:01:38 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:33.163 08:01:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:33.163 08:01:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:33.163 08:01:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:33.163 08:01:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:15:33.163 08:01:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:15:33.163 08:01:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:15:33.163 08:01:38 -- paths/export.sh@5 -- # export PATH 00:15:33.163 08:01:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:15:33.163 08:01:38 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:15:33.163 08:01:38 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:15:33.163 08:01:38 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:15:33.163 08:01:38 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:15:33.163 08:01:38 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:33.163 08:01:38 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:33.163 08:01:38 -- dd/posix.sh@130 -- # tests 00:15:33.163 08:01:38 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO' 00:15:33.163 * First test run, using AIO 00:15:33.163 08:01:38 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:15:33.163 08:01:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:33.163 08:01:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:33.163 08:01:38 -- common/autotest_common.sh@10 -- 
# set +x 00:15:33.431 ************************************ 00:15:33.431 START TEST dd_flag_append 00:15:33.431 ************************************ 00:15:33.431 08:01:38 -- common/autotest_common.sh@1104 -- # append 00:15:33.431 08:01:38 -- dd/posix.sh@16 -- # local dump0 00:15:33.431 08:01:38 -- dd/posix.sh@17 -- # local dump1 00:15:33.431 08:01:38 -- dd/posix.sh@19 -- # gen_bytes 32 00:15:33.431 08:01:38 -- dd/common.sh@98 -- # xtrace_disable 00:15:33.431 08:01:38 -- common/autotest_common.sh@10 -- # set +x 00:15:33.431 08:01:38 -- dd/posix.sh@19 -- # dump0=k1e3wcedvo6knxkvgbtpi68cdqhsy71e 00:15:33.431 08:01:38 -- dd/posix.sh@20 -- # gen_bytes 32 00:15:33.431 08:01:38 -- dd/common.sh@98 -- # xtrace_disable 00:15:33.431 08:01:38 -- common/autotest_common.sh@10 -- # set +x 00:15:33.431 08:01:38 -- dd/posix.sh@20 -- # dump1=7xwo1wewn0qca9qkk0l4c4pw0f4y49j3 00:15:33.431 08:01:38 -- dd/posix.sh@22 -- # printf %s k1e3wcedvo6knxkvgbtpi68cdqhsy71e 00:15:33.431 08:01:38 -- dd/posix.sh@23 -- # printf %s 7xwo1wewn0qca9qkk0l4c4pw0f4y49j3 00:15:33.431 08:01:38 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:15:33.431 [2024-07-13 08:01:39.120568] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:33.431 [2024-07-13 08:01:39.120759] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68557 ] 00:15:33.703 [2024-07-13 08:01:39.255119] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.703 [2024-07-13 08:01:39.304305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.961  Copying: 32/32 [B] (average 31 kBps) 00:15:33.961 00:15:33.961 ************************************ 00:15:33.961 END TEST dd_flag_append 00:15:33.961 ************************************ 00:15:33.961 08:01:39 -- dd/posix.sh@27 -- # [[ 7xwo1wewn0qca9qkk0l4c4pw0f4y49j3k1e3wcedvo6knxkvgbtpi68cdqhsy71e == \7\x\w\o\1\w\e\w\n\0\q\c\a\9\q\k\k\0\l\4\c\4\p\w\0\f\4\y\4\9\j\3\k\1\e\3\w\c\e\d\v\o\6\k\n\x\k\v\g\b\t\p\i\6\8\c\d\q\h\s\y\7\1\e ]] 00:15:33.961 00:15:33.961 real 0m0.607s 00:15:33.961 user 0m0.229s 00:15:33.961 sys 0m0.173s 00:15:33.961 08:01:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:33.961 08:01:39 -- common/autotest_common.sh@10 -- # set +x 00:15:33.961 08:01:39 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:15:33.961 08:01:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:33.961 08:01:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:33.961 08:01:39 -- common/autotest_common.sh@10 -- # set +x 00:15:33.961 ************************************ 00:15:33.961 START TEST dd_flag_directory 00:15:33.961 ************************************ 00:15:33.961 08:01:39 -- common/autotest_common.sh@1104 -- # directory 00:15:33.961 08:01:39 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:33.961 08:01:39 -- common/autotest_common.sh@640 -- # local es=0 00:15:33.961 08:01:39 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:33.961 08:01:39 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:33.961 08:01:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:33.961 08:01:39 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:33.961 08:01:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:33.961 08:01:39 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:33.961 08:01:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:33.961 08:01:39 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:33.961 08:01:39 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:33.961 08:01:39 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:34.219 [2024-07-13 08:01:39.777854] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:34.219 [2024-07-13 08:01:39.778035] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68582 ] 00:15:34.219 [2024-07-13 08:01:39.910383] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.219 [2024-07-13 08:01:39.959835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.477 [2024-07-13 08:01:40.038926] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:15:34.477 [2024-07-13 08:01:40.039002] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:15:34.477 [2024-07-13 08:01:40.039028] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:34.477 [2024-07-13 08:01:40.144048] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:15:34.477 08:01:40 -- common/autotest_common.sh@643 -- # es=236 00:15:34.477 08:01:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:34.477 08:01:40 -- common/autotest_common.sh@652 -- # es=108 00:15:34.477 08:01:40 -- common/autotest_common.sh@653 -- # case "$es" in 00:15:34.477 08:01:40 -- common/autotest_common.sh@660 -- # es=1 00:15:34.477 08:01:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:34.477 08:01:40 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:15:34.477 08:01:40 -- common/autotest_common.sh@640 -- # local es=0 00:15:34.477 08:01:40 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:15:34.477 08:01:40 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:34.477 08:01:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:34.477 08:01:40 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:34.477 08:01:40 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:34.477 08:01:40 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:34.477 08:01:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:34.477 08:01:40 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:34.477 08:01:40 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:34.477 08:01:40 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:15:34.734 [2024-07-13 08:01:40.372236] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:34.734 [2024-07-13 08:01:40.372778] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68603 ] 00:15:34.734 [2024-07-13 08:01:40.516838] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.992 [2024-07-13 08:01:40.566152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.992 [2024-07-13 08:01:40.646925] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:15:34.992 [2024-07-13 08:01:40.646992] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:15:34.992 [2024-07-13 08:01:40.647017] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:34.992 [2024-07-13 08:01:40.751275] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:15:35.251 08:01:40 -- common/autotest_common.sh@643 -- # es=236 00:15:35.251 08:01:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:35.252 08:01:40 -- common/autotest_common.sh@652 -- # es=108 00:15:35.252 08:01:40 -- common/autotest_common.sh@653 -- # case "$es" in 00:15:35.252 08:01:40 -- common/autotest_common.sh@660 -- # es=1 00:15:35.252 08:01:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:35.252 00:15:35.252 real 0m1.201s 00:15:35.252 user 0m0.473s 00:15:35.252 sys 0m0.335s 00:15:35.252 08:01:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:35.252 ************************************ 00:15:35.252 END TEST dd_flag_directory 00:15:35.252 ************************************ 00:15:35.252 08:01:40 -- common/autotest_common.sh@10 -- # set +x 00:15:35.252 08:01:40 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:15:35.252 08:01:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:35.252 08:01:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:35.252 08:01:40 -- common/autotest_common.sh@10 -- # set +x 00:15:35.252 ************************************ 00:15:35.252 START TEST dd_flag_nofollow 00:15:35.252 ************************************ 00:15:35.252 08:01:40 -- common/autotest_common.sh@1104 -- # nofollow 00:15:35.252 08:01:40 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:15:35.252 08:01:40 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:15:35.252 08:01:40 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 
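The dd_flag_nofollow test starting here first aliases both dump files through symlinks, then asserts that spdk_dd refuses to traverse them when nofollow is requested. A condensed sketch of the check, using the same flags and link names as the xtrace (spdk_dd and the dump paths are shortened, and the harness's NOT wrapper simply inverts the exit status):

    # setup: each dump file gets a symlink alias
    ln -fs dd.dump0 dd.dump0.link
    ln -fs dd.dump1 dd.dump1.link
    # both directions are expected to fail: an O_NOFOLLOW open of a symlink
    # returns ELOOP ("Too many levels of symbolic links" in the output below)
    spdk_dd --if=dd.dump0.link --iflag=nofollow --of=dd.dump1 && exit 1
    spdk_dd --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow && exit 1
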
00:15:35.252 08:01:40 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:15:35.252 08:01:40 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:35.252 08:01:40 -- common/autotest_common.sh@640 -- # local es=0 00:15:35.252 08:01:40 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:35.252 08:01:40 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:35.252 08:01:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:35.252 08:01:40 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:35.252 08:01:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:35.252 08:01:40 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:35.252 08:01:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:35.252 08:01:40 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:35.252 08:01:40 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:35.252 08:01:40 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:35.252 [2024-07-13 08:01:41.032272] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:15:35.252 [2024-07-13 08:01:41.032650] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68639 ] 00:15:35.511 [2024-07-13 08:01:41.172058] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.511 [2024-07-13 08:01:41.221303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.511 [2024-07-13 08:01:41.299953] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:15:35.511 [2024-07-13 08:01:41.300034] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:15:35.511 [2024-07-13 08:01:41.300062] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:35.770 [2024-07-13 08:01:41.404620] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:15:35.770 08:01:41 -- common/autotest_common.sh@643 -- # es=216 00:15:35.770 08:01:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:35.770 08:01:41 -- common/autotest_common.sh@652 -- # es=88 00:15:35.770 08:01:41 -- common/autotest_common.sh@653 -- # case "$es" in 00:15:35.770 08:01:41 -- common/autotest_common.sh@660 -- # es=1 00:15:35.770 08:01:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:35.770 08:01:41 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:15:35.770 08:01:41 -- common/autotest_common.sh@640 -- # local es=0 00:15:35.770 08:01:41 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:15:35.770 08:01:41 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:35.770 08:01:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:35.770 08:01:41 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:35.770 08:01:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:35.770 08:01:41 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:35.770 08:01:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:35.770 08:01:41 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:35.770 08:01:41 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:35.770 08:01:41 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:15:36.028 [2024-07-13 08:01:41.631863] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:15:36.028 [2024-07-13 08:01:41.632094] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68648 ] 00:15:36.028 [2024-07-13 08:01:41.772419] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.028 [2024-07-13 08:01:41.822531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.286 [2024-07-13 08:01:41.901926] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:15:36.286 [2024-07-13 08:01:41.901998] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:15:36.286 [2024-07-13 08:01:41.902024] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:36.286 [2024-07-13 08:01:42.006528] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:15:36.286 08:01:42 -- common/autotest_common.sh@643 -- # es=216 00:15:36.286 08:01:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:36.286 08:01:42 -- common/autotest_common.sh@652 -- # es=88 00:15:36.286 08:01:42 -- common/autotest_common.sh@653 -- # case "$es" in 00:15:36.286 08:01:42 -- common/autotest_common.sh@660 -- # es=1 00:15:36.286 08:01:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:36.286 08:01:42 -- dd/posix.sh@46 -- # gen_bytes 512 00:15:36.286 08:01:42 -- dd/common.sh@98 -- # xtrace_disable 00:15:36.286 08:01:42 -- common/autotest_common.sh@10 -- # set +x 00:15:36.545 08:01:42 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:36.545 [2024-07-13 08:01:42.235224] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:15:36.545 [2024-07-13 08:01:42.235401] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68664 ] 00:15:36.803 [2024-07-13 08:01:42.372602] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.803 [2024-07-13 08:01:42.422132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.063  Copying: 512/512 [B] (average 500 kBps) 00:15:37.063 00:15:37.063 ************************************ 00:15:37.063 END TEST dd_flag_nofollow 00:15:37.063 ************************************ 00:15:37.063 08:01:42 -- dd/posix.sh@49 -- # [[ 4kwdzemljfxqoksox106ifc7qt27blkm1egzn2vyjf2f9gl3vdjgq651cc345gl9qheljhhid02ma0smkkw4vnimjt4jm9xkzlkz1vsvyl575xsu8k6xjw0bp4kjhgat41cknrpebu36h39qc0pl7vhjbs7b2spghr5e465hnf32u9ygj1m15z7ofutc75xe2rlvacpdxpe94c131rd4gcsys1193rmg3vg7r8ad5c5s8s2pibexy5paj5vmy8jxm5owuh6znfg5jvp8s8rqrt3g3ig14deztea8cqnzt198es0fhidju4p14o86mij0h1fxdlihsd4y4h92byl1ol95qajvmzl6dnd3o7futq6vlmp587z95wc4ig3va6o7n7gs65muv4e8j3y3gk91v0ovlff65n6sakscs09y2gnbubq75ci4c5q8yytkgntqk5evslp3x2f593dt83m82omrqtdaf9xm0ugcchsxginpgheem5732wbd47b03ro7 == \4\k\w\d\z\e\m\l\j\f\x\q\o\k\s\o\x\1\0\6\i\f\c\7\q\t\2\7\b\l\k\m\1\e\g\z\n\2\v\y\j\f\2\f\9\g\l\3\v\d\j\g\q\6\5\1\c\c\3\4\5\g\l\9\q\h\e\l\j\h\h\i\d\0\2\m\a\0\s\m\k\k\w\4\v\n\i\m\j\t\4\j\m\9\x\k\z\l\k\z\1\v\s\v\y\l\5\7\5\x\s\u\8\k\6\x\j\w\0\b\p\4\k\j\h\g\a\t\4\1\c\k\n\r\p\e\b\u\3\6\h\3\9\q\c\0\p\l\7\v\h\j\b\s\7\b\2\s\p\g\h\r\5\e\4\6\5\h\n\f\3\2\u\9\y\g\j\1\m\1\5\z\7\o\f\u\t\c\7\5\x\e\2\r\l\v\a\c\p\d\x\p\e\9\4\c\1\3\1\r\d\4\g\c\s\y\s\1\1\9\3\r\m\g\3\v\g\7\r\8\a\d\5\c\5\s\8\s\2\p\i\b\e\x\y\5\p\a\j\5\v\m\y\8\j\x\m\5\o\w\u\h\6\z\n\f\g\5\j\v\p\8\s\8\r\q\r\t\3\g\3\i\g\1\4\d\e\z\t\e\a\8\c\q\n\z\t\1\9\8\e\s\0\f\h\i\d\j\u\4\p\1\4\o\8\6\m\i\j\0\h\1\f\x\d\l\i\h\s\d\4\y\4\h\9\2\b\y\l\1\o\l\9\5\q\a\j\v\m\z\l\6\d\n\d\3\o\7\f\u\t\q\6\v\l\m\p\5\8\7\z\9\5\w\c\4\i\g\3\v\a\6\o\7\n\7\g\s\6\5\m\u\v\4\e\8\j\3\y\3\g\k\9\1\v\0\o\v\l\f\f\6\5\n\6\s\a\k\s\c\s\0\9\y\2\g\n\b\u\b\q\7\5\c\i\4\c\5\q\8\y\y\t\k\g\n\t\q\k\5\e\v\s\l\p\3\x\2\f\5\9\3\d\t\8\3\m\8\2\o\m\r\q\t\d\a\f\9\x\m\0\u\g\c\c\h\s\x\g\i\n\p\g\h\e\e\m\5\7\3\2\w\b\d\4\7\b\0\3\r\o\7 ]] 00:15:37.063 00:15:37.063 real 0m1.812s 00:15:37.063 user 0m0.712s 00:15:37.063 sys 0m0.502s 00:15:37.063 08:01:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:37.063 08:01:42 -- common/autotest_common.sh@10 -- # set +x 00:15:37.063 08:01:42 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:15:37.063 08:01:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:37.063 08:01:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:37.063 08:01:42 -- common/autotest_common.sh@10 -- # set +x 00:15:37.063 ************************************ 00:15:37.063 START TEST dd_flag_noatime 00:15:37.063 ************************************ 00:15:37.063 08:01:42 -- common/autotest_common.sh@1104 -- # noatime 00:15:37.063 08:01:42 -- dd/posix.sh@53 -- # local atime_if 00:15:37.063 08:01:42 -- dd/posix.sh@54 -- # local atime_of 00:15:37.063 08:01:42 -- dd/posix.sh@58 -- # gen_bytes 512 00:15:37.063 08:01:42 -- dd/common.sh@98 -- # xtrace_disable 00:15:37.063 08:01:42 -- common/autotest_common.sh@10 -- # set +x 00:15:37.063 08:01:42 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:37.063 08:01:42 -- dd/posix.sh@60 -- # atime_if=1720857702 00:15:37.063 08:01:42 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:37.063 08:01:42 -- dd/posix.sh@61 -- # atime_of=1720857702 00:15:37.063 08:01:42 -- dd/posix.sh@66 -- # sleep 1 00:15:37.997 08:01:43 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:38.255 [2024-07-13 08:01:43.923623] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:38.255 [2024-07-13 08:01:43.923811] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68710 ] 00:15:38.255 [2024-07-13 08:01:44.054421] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.513 [2024-07-13 08:01:44.104942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.772  Copying: 512/512 [B] (average 500 kBps) 00:15:38.772 00:15:38.772 08:01:44 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:38.772 08:01:44 -- dd/posix.sh@69 -- # (( atime_if == 1720857702 )) 00:15:38.772 08:01:44 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:38.772 08:01:44 -- dd/posix.sh@70 -- # (( atime_of == 1720857702 )) 00:15:38.772 08:01:44 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:38.772 [2024-07-13 08:01:44.533166] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:38.772 [2024-07-13 08:01:44.533336] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68728 ] 00:15:39.037 [2024-07-13 08:01:44.663446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.037 [2024-07-13 08:01:44.712355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.296  Copying: 512/512 [B] (average 500 kBps) 00:15:39.297 00:15:39.297 08:01:44 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:39.297 ************************************ 00:15:39.297 END TEST dd_flag_noatime 00:15:39.297 ************************************ 00:15:39.297 08:01:44 -- dd/posix.sh@73 -- # (( atime_if < 1720857704 )) 00:15:39.297 00:15:39.297 real 0m2.236s 00:15:39.297 user 0m0.483s 00:15:39.297 sys 0m0.347s 00:15:39.297 08:01:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:39.297 08:01:44 -- common/autotest_common.sh@10 -- # set +x 00:15:39.297 08:01:45 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:15:39.297 08:01:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:39.297 08:01:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:39.297 08:01:45 -- common/autotest_common.sh@10 -- # set +x 00:15:39.297 ************************************ 00:15:39.297 START TEST dd_flags_misc 00:15:39.297 ************************************ 00:15:39.297 08:01:45 -- common/autotest_common.sh@1104 -- # io 00:15:39.297 08:01:45 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:15:39.297 08:01:45 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 
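dd_flags_misc, which begins here, crosses every read flag with every write flag; the loop driving the next several copies looks like this, reconstructed from the flags_ro/flags_rw assignments and the xtrace that follows (dump-file paths shortened):

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)   # write side also exercises sync/dsync
    for flag_ro in "${flags_ro[@]}"; do
        # a fresh 512-byte pattern is generated per read flag (gen_bytes 512)
        for flag_rw in "${flags_rw[@]}"; do
            spdk_dd --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
        done
    done
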
00:15:39.297 08:01:45 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:15:39.297 08:01:45 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:15:39.297 08:01:45 -- dd/posix.sh@86 -- # gen_bytes 512 00:15:39.297 08:01:45 -- dd/common.sh@98 -- # xtrace_disable 00:15:39.297 08:01:45 -- common/autotest_common.sh@10 -- # set +x 00:15:39.297 08:01:45 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:39.297 08:01:45 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:15:39.555 [2024-07-13 08:01:45.196040] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:39.555 [2024-07-13 08:01:45.196233] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68764 ] 00:15:39.555 [2024-07-13 08:01:45.326291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.814 [2024-07-13 08:01:45.375364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.073  Copying: 512/512 [B] (average 500 kBps) 00:15:40.073 00:15:40.073 08:01:45 -- dd/posix.sh@93 -- # [[ 0dgr2191ev5r31kflup3r481zjdbjzwej1qm7gojsr9iheeti0aaa38jur74gufvudiyqvsfxo6gcd0u33pbqgzpw4y21su1yur6b0oeuxa9647z2ejjiowitj9qt25jfxcs37s5mesz0ulpnduhgz6z1hb1t1e6mpzolxf325yfwkp3t9lwajhacr6kke8yd4jkfxy5owl3do94q3u3mh507apg78luxlhcrx23a5mwjpqf6h5iadhawqo67fvyc7waa6xbptov7j83p40i8udkwzo47xax0zcxut8hqxwr9gor9qrftpqs9piu2nyirtcw0phz5eqj18pbflxrbxjxkqkbnxoe96fpxjiiqc1ibsqhpugb8z9yaerawlpj04o2gs7r5yrzn4rzvcv40srhdg7pin4fqwfpp8rxipn4vs8yvvwxdj5yag966kwihpwa2ubp1eu3e44hyugfpxh7f1jbl4yfnx0wszf6qgnj1w94k7qc9s4f90hvcnnb == \0\d\g\r\2\1\9\1\e\v\5\r\3\1\k\f\l\u\p\3\r\4\8\1\z\j\d\b\j\z\w\e\j\1\q\m\7\g\o\j\s\r\9\i\h\e\e\t\i\0\a\a\a\3\8\j\u\r\7\4\g\u\f\v\u\d\i\y\q\v\s\f\x\o\6\g\c\d\0\u\3\3\p\b\q\g\z\p\w\4\y\2\1\s\u\1\y\u\r\6\b\0\o\e\u\x\a\9\6\4\7\z\2\e\j\j\i\o\w\i\t\j\9\q\t\2\5\j\f\x\c\s\3\7\s\5\m\e\s\z\0\u\l\p\n\d\u\h\g\z\6\z\1\h\b\1\t\1\e\6\m\p\z\o\l\x\f\3\2\5\y\f\w\k\p\3\t\9\l\w\a\j\h\a\c\r\6\k\k\e\8\y\d\4\j\k\f\x\y\5\o\w\l\3\d\o\9\4\q\3\u\3\m\h\5\0\7\a\p\g\7\8\l\u\x\l\h\c\r\x\2\3\a\5\m\w\j\p\q\f\6\h\5\i\a\d\h\a\w\q\o\6\7\f\v\y\c\7\w\a\a\6\x\b\p\t\o\v\7\j\8\3\p\4\0\i\8\u\d\k\w\z\o\4\7\x\a\x\0\z\c\x\u\t\8\h\q\x\w\r\9\g\o\r\9\q\r\f\t\p\q\s\9\p\i\u\2\n\y\i\r\t\c\w\0\p\h\z\5\e\q\j\1\8\p\b\f\l\x\r\b\x\j\x\k\q\k\b\n\x\o\e\9\6\f\p\x\j\i\i\q\c\1\i\b\s\q\h\p\u\g\b\8\z\9\y\a\e\r\a\w\l\p\j\0\4\o\2\g\s\7\r\5\y\r\z\n\4\r\z\v\c\v\4\0\s\r\h\d\g\7\p\i\n\4\f\q\w\f\p\p\8\r\x\i\p\n\4\v\s\8\y\v\v\w\x\d\j\5\y\a\g\9\6\6\k\w\i\h\p\w\a\2\u\b\p\1\e\u\3\e\4\4\h\y\u\g\f\p\x\h\7\f\1\j\b\l\4\y\f\n\x\0\w\s\z\f\6\q\g\n\j\1\w\9\4\k\7\q\c\9\s\4\f\9\0\h\v\c\n\n\b ]] 00:15:40.073 08:01:45 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:40.073 08:01:45 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:15:40.073 [2024-07-13 08:01:45.790407] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:15:40.073 [2024-07-13 08:01:45.790620] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68775 ] 00:15:40.332 [2024-07-13 08:01:45.922538] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.332 [2024-07-13 08:01:45.971686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.592  Copying: 512/512 [B] (average 500 kBps) 00:15:40.592 00:15:40.592 08:01:46 -- dd/posix.sh@93 -- # [[ 0dgr2191ev5r31kflup3r481zjdbjzwej1qm7gojsr9iheeti0aaa38jur74gufvudiyqvsfxo6gcd0u33pbqgzpw4y21su1yur6b0oeuxa9647z2ejjiowitj9qt25jfxcs37s5mesz0ulpnduhgz6z1hb1t1e6mpzolxf325yfwkp3t9lwajhacr6kke8yd4jkfxy5owl3do94q3u3mh507apg78luxlhcrx23a5mwjpqf6h5iadhawqo67fvyc7waa6xbptov7j83p40i8udkwzo47xax0zcxut8hqxwr9gor9qrftpqs9piu2nyirtcw0phz5eqj18pbflxrbxjxkqkbnxoe96fpxjiiqc1ibsqhpugb8z9yaerawlpj04o2gs7r5yrzn4rzvcv40srhdg7pin4fqwfpp8rxipn4vs8yvvwxdj5yag966kwihpwa2ubp1eu3e44hyugfpxh7f1jbl4yfnx0wszf6qgnj1w94k7qc9s4f90hvcnnb == \0\d\g\r\2\1\9\1\e\v\5\r\3\1\k\f\l\u\p\3\r\4\8\1\z\j\d\b\j\z\w\e\j\1\q\m\7\g\o\j\s\r\9\i\h\e\e\t\i\0\a\a\a\3\8\j\u\r\7\4\g\u\f\v\u\d\i\y\q\v\s\f\x\o\6\g\c\d\0\u\3\3\p\b\q\g\z\p\w\4\y\2\1\s\u\1\y\u\r\6\b\0\o\e\u\x\a\9\6\4\7\z\2\e\j\j\i\o\w\i\t\j\9\q\t\2\5\j\f\x\c\s\3\7\s\5\m\e\s\z\0\u\l\p\n\d\u\h\g\z\6\z\1\h\b\1\t\1\e\6\m\p\z\o\l\x\f\3\2\5\y\f\w\k\p\3\t\9\l\w\a\j\h\a\c\r\6\k\k\e\8\y\d\4\j\k\f\x\y\5\o\w\l\3\d\o\9\4\q\3\u\3\m\h\5\0\7\a\p\g\7\8\l\u\x\l\h\c\r\x\2\3\a\5\m\w\j\p\q\f\6\h\5\i\a\d\h\a\w\q\o\6\7\f\v\y\c\7\w\a\a\6\x\b\p\t\o\v\7\j\8\3\p\4\0\i\8\u\d\k\w\z\o\4\7\x\a\x\0\z\c\x\u\t\8\h\q\x\w\r\9\g\o\r\9\q\r\f\t\p\q\s\9\p\i\u\2\n\y\i\r\t\c\w\0\p\h\z\5\e\q\j\1\8\p\b\f\l\x\r\b\x\j\x\k\q\k\b\n\x\o\e\9\6\f\p\x\j\i\i\q\c\1\i\b\s\q\h\p\u\g\b\8\z\9\y\a\e\r\a\w\l\p\j\0\4\o\2\g\s\7\r\5\y\r\z\n\4\r\z\v\c\v\4\0\s\r\h\d\g\7\p\i\n\4\f\q\w\f\p\p\8\r\x\i\p\n\4\v\s\8\y\v\v\w\x\d\j\5\y\a\g\9\6\6\k\w\i\h\p\w\a\2\u\b\p\1\e\u\3\e\4\4\h\y\u\g\f\p\x\h\7\f\1\j\b\l\4\y\f\n\x\0\w\s\z\f\6\q\g\n\j\1\w\9\4\k\7\q\c\9\s\4\f\9\0\h\v\c\n\n\b ]] 00:15:40.592 08:01:46 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:40.592 08:01:46 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:15:40.592 [2024-07-13 08:01:46.391684] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:15:40.592 [2024-07-13 08:01:46.391864] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68790 ] 00:15:40.849 [2024-07-13 08:01:46.521590] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.849 [2024-07-13 08:01:46.565102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.107  Copying: 512/512 [B] (average 100 kBps) 00:15:41.107 00:15:41.107 08:01:46 -- dd/posix.sh@93 -- # [[ 0dgr2191ev5r31kflup3r481zjdbjzwej1qm7gojsr9iheeti0aaa38jur74gufvudiyqvsfxo6gcd0u33pbqgzpw4y21su1yur6b0oeuxa9647z2ejjiowitj9qt25jfxcs37s5mesz0ulpnduhgz6z1hb1t1e6mpzolxf325yfwkp3t9lwajhacr6kke8yd4jkfxy5owl3do94q3u3mh507apg78luxlhcrx23a5mwjpqf6h5iadhawqo67fvyc7waa6xbptov7j83p40i8udkwzo47xax0zcxut8hqxwr9gor9qrftpqs9piu2nyirtcw0phz5eqj18pbflxrbxjxkqkbnxoe96fpxjiiqc1ibsqhpugb8z9yaerawlpj04o2gs7r5yrzn4rzvcv40srhdg7pin4fqwfpp8rxipn4vs8yvvwxdj5yag966kwihpwa2ubp1eu3e44hyugfpxh7f1jbl4yfnx0wszf6qgnj1w94k7qc9s4f90hvcnnb == \0\d\g\r\2\1\9\1\e\v\5\r\3\1\k\f\l\u\p\3\r\4\8\1\z\j\d\b\j\z\w\e\j\1\q\m\7\g\o\j\s\r\9\i\h\e\e\t\i\0\a\a\a\3\8\j\u\r\7\4\g\u\f\v\u\d\i\y\q\v\s\f\x\o\6\g\c\d\0\u\3\3\p\b\q\g\z\p\w\4\y\2\1\s\u\1\y\u\r\6\b\0\o\e\u\x\a\9\6\4\7\z\2\e\j\j\i\o\w\i\t\j\9\q\t\2\5\j\f\x\c\s\3\7\s\5\m\e\s\z\0\u\l\p\n\d\u\h\g\z\6\z\1\h\b\1\t\1\e\6\m\p\z\o\l\x\f\3\2\5\y\f\w\k\p\3\t\9\l\w\a\j\h\a\c\r\6\k\k\e\8\y\d\4\j\k\f\x\y\5\o\w\l\3\d\o\9\4\q\3\u\3\m\h\5\0\7\a\p\g\7\8\l\u\x\l\h\c\r\x\2\3\a\5\m\w\j\p\q\f\6\h\5\i\a\d\h\a\w\q\o\6\7\f\v\y\c\7\w\a\a\6\x\b\p\t\o\v\7\j\8\3\p\4\0\i\8\u\d\k\w\z\o\4\7\x\a\x\0\z\c\x\u\t\8\h\q\x\w\r\9\g\o\r\9\q\r\f\t\p\q\s\9\p\i\u\2\n\y\i\r\t\c\w\0\p\h\z\5\e\q\j\1\8\p\b\f\l\x\r\b\x\j\x\k\q\k\b\n\x\o\e\9\6\f\p\x\j\i\i\q\c\1\i\b\s\q\h\p\u\g\b\8\z\9\y\a\e\r\a\w\l\p\j\0\4\o\2\g\s\7\r\5\y\r\z\n\4\r\z\v\c\v\4\0\s\r\h\d\g\7\p\i\n\4\f\q\w\f\p\p\8\r\x\i\p\n\4\v\s\8\y\v\v\w\x\d\j\5\y\a\g\9\6\6\k\w\i\h\p\w\a\2\u\b\p\1\e\u\3\e\4\4\h\y\u\g\f\p\x\h\7\f\1\j\b\l\4\y\f\n\x\0\w\s\z\f\6\q\g\n\j\1\w\9\4\k\7\q\c\9\s\4\f\9\0\h\v\c\n\n\b ]] 00:15:41.107 08:01:46 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:41.107 08:01:46 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:15:41.372 [2024-07-13 08:01:46.989223] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:15:41.372 [2024-07-13 08:01:46.989411] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68799 ] 00:15:41.372 [2024-07-13 08:01:47.122672] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.372 [2024-07-13 08:01:47.171608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.631  Copying: 512/512 [B] (average 250 kBps) 00:15:41.631 00:15:41.890 08:01:47 -- dd/posix.sh@93 -- # [[ 0dgr2191ev5r31kflup3r481zjdbjzwej1qm7gojsr9iheeti0aaa38jur74gufvudiyqvsfxo6gcd0u33pbqgzpw4y21su1yur6b0oeuxa9647z2ejjiowitj9qt25jfxcs37s5mesz0ulpnduhgz6z1hb1t1e6mpzolxf325yfwkp3t9lwajhacr6kke8yd4jkfxy5owl3do94q3u3mh507apg78luxlhcrx23a5mwjpqf6h5iadhawqo67fvyc7waa6xbptov7j83p40i8udkwzo47xax0zcxut8hqxwr9gor9qrftpqs9piu2nyirtcw0phz5eqj18pbflxrbxjxkqkbnxoe96fpxjiiqc1ibsqhpugb8z9yaerawlpj04o2gs7r5yrzn4rzvcv40srhdg7pin4fqwfpp8rxipn4vs8yvvwxdj5yag966kwihpwa2ubp1eu3e44hyugfpxh7f1jbl4yfnx0wszf6qgnj1w94k7qc9s4f90hvcnnb == \0\d\g\r\2\1\9\1\e\v\5\r\3\1\k\f\l\u\p\3\r\4\8\1\z\j\d\b\j\z\w\e\j\1\q\m\7\g\o\j\s\r\9\i\h\e\e\t\i\0\a\a\a\3\8\j\u\r\7\4\g\u\f\v\u\d\i\y\q\v\s\f\x\o\6\g\c\d\0\u\3\3\p\b\q\g\z\p\w\4\y\2\1\s\u\1\y\u\r\6\b\0\o\e\u\x\a\9\6\4\7\z\2\e\j\j\i\o\w\i\t\j\9\q\t\2\5\j\f\x\c\s\3\7\s\5\m\e\s\z\0\u\l\p\n\d\u\h\g\z\6\z\1\h\b\1\t\1\e\6\m\p\z\o\l\x\f\3\2\5\y\f\w\k\p\3\t\9\l\w\a\j\h\a\c\r\6\k\k\e\8\y\d\4\j\k\f\x\y\5\o\w\l\3\d\o\9\4\q\3\u\3\m\h\5\0\7\a\p\g\7\8\l\u\x\l\h\c\r\x\2\3\a\5\m\w\j\p\q\f\6\h\5\i\a\d\h\a\w\q\o\6\7\f\v\y\c\7\w\a\a\6\x\b\p\t\o\v\7\j\8\3\p\4\0\i\8\u\d\k\w\z\o\4\7\x\a\x\0\z\c\x\u\t\8\h\q\x\w\r\9\g\o\r\9\q\r\f\t\p\q\s\9\p\i\u\2\n\y\i\r\t\c\w\0\p\h\z\5\e\q\j\1\8\p\b\f\l\x\r\b\x\j\x\k\q\k\b\n\x\o\e\9\6\f\p\x\j\i\i\q\c\1\i\b\s\q\h\p\u\g\b\8\z\9\y\a\e\r\a\w\l\p\j\0\4\o\2\g\s\7\r\5\y\r\z\n\4\r\z\v\c\v\4\0\s\r\h\d\g\7\p\i\n\4\f\q\w\f\p\p\8\r\x\i\p\n\4\v\s\8\y\v\v\w\x\d\j\5\y\a\g\9\6\6\k\w\i\h\p\w\a\2\u\b\p\1\e\u\3\e\4\4\h\y\u\g\f\p\x\h\7\f\1\j\b\l\4\y\f\n\x\0\w\s\z\f\6\q\g\n\j\1\w\9\4\k\7\q\c\9\s\4\f\9\0\h\v\c\n\n\b ]] 00:15:41.890 08:01:47 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:15:41.890 08:01:47 -- dd/posix.sh@86 -- # gen_bytes 512 00:15:41.890 08:01:47 -- dd/common.sh@98 -- # xtrace_disable 00:15:41.890 08:01:47 -- common/autotest_common.sh@10 -- # set +x 00:15:41.890 08:01:47 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:41.890 08:01:47 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:15:41.890 [2024-07-13 08:01:47.596003] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:15:41.890 [2024-07-13 08:01:47.596210] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68812 ] 00:15:42.148 [2024-07-13 08:01:47.730334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.148 [2024-07-13 08:01:47.779881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.408  Copying: 512/512 [B] (average 500 kBps) 00:15:42.408 00:15:42.408 08:01:48 -- dd/posix.sh@93 -- # [[ 0n7xsoxj3x7jkdvc6ptv5mn684kzh3z6jhmzl7c9u7zge8ycaukrvv0i4cu0346smeucuohb4tfbny1nel8s2ulknvhfukyeyqzx34dufpah74paohxpv3jcs7ir94wfqqv3msjbles4yk5w5qchmgi2ecqw4eh28rpwzcodj9h7utfd83sqodcrt817hy6x90m3czbn48lrnmpb5gxq5gbranpjavnp9lnf1iz7gluq9ttepz2do46omv9jkw19wtxi58hzw45rf4dev2532msyp3ymj3edonmnvsjhepodbdqbu1oc4f6ojzoga5c2ilj0v1k2mn8w3gbg6kr1nrx9ap0w4a6w4rww2nnl6bnmh48abybmr95s5xc6k7t1ydgaq46l1gt53qsewr532apnyzhqlzomcea4uudi803tr1mlepv1hi8ezejkd0282jqdnm8k9yuna754ueje98bl31ua7nidph755ipycb3z5qglns3j8pt1k237x4z0 == \0\n\7\x\s\o\x\j\3\x\7\j\k\d\v\c\6\p\t\v\5\m\n\6\8\4\k\z\h\3\z\6\j\h\m\z\l\7\c\9\u\7\z\g\e\8\y\c\a\u\k\r\v\v\0\i\4\c\u\0\3\4\6\s\m\e\u\c\u\o\h\b\4\t\f\b\n\y\1\n\e\l\8\s\2\u\l\k\n\v\h\f\u\k\y\e\y\q\z\x\3\4\d\u\f\p\a\h\7\4\p\a\o\h\x\p\v\3\j\c\s\7\i\r\9\4\w\f\q\q\v\3\m\s\j\b\l\e\s\4\y\k\5\w\5\q\c\h\m\g\i\2\e\c\q\w\4\e\h\2\8\r\p\w\z\c\o\d\j\9\h\7\u\t\f\d\8\3\s\q\o\d\c\r\t\8\1\7\h\y\6\x\9\0\m\3\c\z\b\n\4\8\l\r\n\m\p\b\5\g\x\q\5\g\b\r\a\n\p\j\a\v\n\p\9\l\n\f\1\i\z\7\g\l\u\q\9\t\t\e\p\z\2\d\o\4\6\o\m\v\9\j\k\w\1\9\w\t\x\i\5\8\h\z\w\4\5\r\f\4\d\e\v\2\5\3\2\m\s\y\p\3\y\m\j\3\e\d\o\n\m\n\v\s\j\h\e\p\o\d\b\d\q\b\u\1\o\c\4\f\6\o\j\z\o\g\a\5\c\2\i\l\j\0\v\1\k\2\m\n\8\w\3\g\b\g\6\k\r\1\n\r\x\9\a\p\0\w\4\a\6\w\4\r\w\w\2\n\n\l\6\b\n\m\h\4\8\a\b\y\b\m\r\9\5\s\5\x\c\6\k\7\t\1\y\d\g\a\q\4\6\l\1\g\t\5\3\q\s\e\w\r\5\3\2\a\p\n\y\z\h\q\l\z\o\m\c\e\a\4\u\u\d\i\8\0\3\t\r\1\m\l\e\p\v\1\h\i\8\e\z\e\j\k\d\0\2\8\2\j\q\d\n\m\8\k\9\y\u\n\a\7\5\4\u\e\j\e\9\8\b\l\3\1\u\a\7\n\i\d\p\h\7\5\5\i\p\y\c\b\3\z\5\q\g\l\n\s\3\j\8\p\t\1\k\2\3\7\x\4\z\0 ]] 00:15:42.408 08:01:48 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:42.408 08:01:48 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:15:42.408 [2024-07-13 08:01:48.195842] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:15:42.408 [2024-07-13 08:01:48.196025] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68828 ] 00:15:42.667 [2024-07-13 08:01:48.328546] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.667 [2024-07-13 08:01:48.377013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.926  Copying: 512/512 [B] (average 500 kBps) 00:15:42.926 00:15:42.926 08:01:48 -- dd/posix.sh@93 -- # [[ 0n7xsoxj3x7jkdvc6ptv5mn684kzh3z6jhmzl7c9u7zge8ycaukrvv0i4cu0346smeucuohb4tfbny1nel8s2ulknvhfukyeyqzx34dufpah74paohxpv3jcs7ir94wfqqv3msjbles4yk5w5qchmgi2ecqw4eh28rpwzcodj9h7utfd83sqodcrt817hy6x90m3czbn48lrnmpb5gxq5gbranpjavnp9lnf1iz7gluq9ttepz2do46omv9jkw19wtxi58hzw45rf4dev2532msyp3ymj3edonmnvsjhepodbdqbu1oc4f6ojzoga5c2ilj0v1k2mn8w3gbg6kr1nrx9ap0w4a6w4rww2nnl6bnmh48abybmr95s5xc6k7t1ydgaq46l1gt53qsewr532apnyzhqlzomcea4uudi803tr1mlepv1hi8ezejkd0282jqdnm8k9yuna754ueje98bl31ua7nidph755ipycb3z5qglns3j8pt1k237x4z0 == \0\n\7\x\s\o\x\j\3\x\7\j\k\d\v\c\6\p\t\v\5\m\n\6\8\4\k\z\h\3\z\6\j\h\m\z\l\7\c\9\u\7\z\g\e\8\y\c\a\u\k\r\v\v\0\i\4\c\u\0\3\4\6\s\m\e\u\c\u\o\h\b\4\t\f\b\n\y\1\n\e\l\8\s\2\u\l\k\n\v\h\f\u\k\y\e\y\q\z\x\3\4\d\u\f\p\a\h\7\4\p\a\o\h\x\p\v\3\j\c\s\7\i\r\9\4\w\f\q\q\v\3\m\s\j\b\l\e\s\4\y\k\5\w\5\q\c\h\m\g\i\2\e\c\q\w\4\e\h\2\8\r\p\w\z\c\o\d\j\9\h\7\u\t\f\d\8\3\s\q\o\d\c\r\t\8\1\7\h\y\6\x\9\0\m\3\c\z\b\n\4\8\l\r\n\m\p\b\5\g\x\q\5\g\b\r\a\n\p\j\a\v\n\p\9\l\n\f\1\i\z\7\g\l\u\q\9\t\t\e\p\z\2\d\o\4\6\o\m\v\9\j\k\w\1\9\w\t\x\i\5\8\h\z\w\4\5\r\f\4\d\e\v\2\5\3\2\m\s\y\p\3\y\m\j\3\e\d\o\n\m\n\v\s\j\h\e\p\o\d\b\d\q\b\u\1\o\c\4\f\6\o\j\z\o\g\a\5\c\2\i\l\j\0\v\1\k\2\m\n\8\w\3\g\b\g\6\k\r\1\n\r\x\9\a\p\0\w\4\a\6\w\4\r\w\w\2\n\n\l\6\b\n\m\h\4\8\a\b\y\b\m\r\9\5\s\5\x\c\6\k\7\t\1\y\d\g\a\q\4\6\l\1\g\t\5\3\q\s\e\w\r\5\3\2\a\p\n\y\z\h\q\l\z\o\m\c\e\a\4\u\u\d\i\8\0\3\t\r\1\m\l\e\p\v\1\h\i\8\e\z\e\j\k\d\0\2\8\2\j\q\d\n\m\8\k\9\y\u\n\a\7\5\4\u\e\j\e\9\8\b\l\3\1\u\a\7\n\i\d\p\h\7\5\5\i\p\y\c\b\3\z\5\q\g\l\n\s\3\j\8\p\t\1\k\2\3\7\x\4\z\0 ]] 00:15:42.926 08:01:48 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:42.926 08:01:48 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:15:43.185 [2024-07-13 08:01:48.799826] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:15:43.185 [2024-07-13 08:01:48.800010] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68834 ] 00:15:43.185 [2024-07-13 08:01:48.932100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.185 [2024-07-13 08:01:48.981404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.443  Copying: 512/512 [B] (average 250 kBps) 00:15:43.443 00:15:43.702 08:01:49 -- dd/posix.sh@93 -- # [[ 0n7xsoxj3x7jkdvc6ptv5mn684kzh3z6jhmzl7c9u7zge8ycaukrvv0i4cu0346smeucuohb4tfbny1nel8s2ulknvhfukyeyqzx34dufpah74paohxpv3jcs7ir94wfqqv3msjbles4yk5w5qchmgi2ecqw4eh28rpwzcodj9h7utfd83sqodcrt817hy6x90m3czbn48lrnmpb5gxq5gbranpjavnp9lnf1iz7gluq9ttepz2do46omv9jkw19wtxi58hzw45rf4dev2532msyp3ymj3edonmnvsjhepodbdqbu1oc4f6ojzoga5c2ilj0v1k2mn8w3gbg6kr1nrx9ap0w4a6w4rww2nnl6bnmh48abybmr95s5xc6k7t1ydgaq46l1gt53qsewr532apnyzhqlzomcea4uudi803tr1mlepv1hi8ezejkd0282jqdnm8k9yuna754ueje98bl31ua7nidph755ipycb3z5qglns3j8pt1k237x4z0 == \0\n\7\x\s\o\x\j\3\x\7\j\k\d\v\c\6\p\t\v\5\m\n\6\8\4\k\z\h\3\z\6\j\h\m\z\l\7\c\9\u\7\z\g\e\8\y\c\a\u\k\r\v\v\0\i\4\c\u\0\3\4\6\s\m\e\u\c\u\o\h\b\4\t\f\b\n\y\1\n\e\l\8\s\2\u\l\k\n\v\h\f\u\k\y\e\y\q\z\x\3\4\d\u\f\p\a\h\7\4\p\a\o\h\x\p\v\3\j\c\s\7\i\r\9\4\w\f\q\q\v\3\m\s\j\b\l\e\s\4\y\k\5\w\5\q\c\h\m\g\i\2\e\c\q\w\4\e\h\2\8\r\p\w\z\c\o\d\j\9\h\7\u\t\f\d\8\3\s\q\o\d\c\r\t\8\1\7\h\y\6\x\9\0\m\3\c\z\b\n\4\8\l\r\n\m\p\b\5\g\x\q\5\g\b\r\a\n\p\j\a\v\n\p\9\l\n\f\1\i\z\7\g\l\u\q\9\t\t\e\p\z\2\d\o\4\6\o\m\v\9\j\k\w\1\9\w\t\x\i\5\8\h\z\w\4\5\r\f\4\d\e\v\2\5\3\2\m\s\y\p\3\y\m\j\3\e\d\o\n\m\n\v\s\j\h\e\p\o\d\b\d\q\b\u\1\o\c\4\f\6\o\j\z\o\g\a\5\c\2\i\l\j\0\v\1\k\2\m\n\8\w\3\g\b\g\6\k\r\1\n\r\x\9\a\p\0\w\4\a\6\w\4\r\w\w\2\n\n\l\6\b\n\m\h\4\8\a\b\y\b\m\r\9\5\s\5\x\c\6\k\7\t\1\y\d\g\a\q\4\6\l\1\g\t\5\3\q\s\e\w\r\5\3\2\a\p\n\y\z\h\q\l\z\o\m\c\e\a\4\u\u\d\i\8\0\3\t\r\1\m\l\e\p\v\1\h\i\8\e\z\e\j\k\d\0\2\8\2\j\q\d\n\m\8\k\9\y\u\n\a\7\5\4\u\e\j\e\9\8\b\l\3\1\u\a\7\n\i\d\p\h\7\5\5\i\p\y\c\b\3\z\5\q\g\l\n\s\3\j\8\p\t\1\k\2\3\7\x\4\z\0 ]] 00:15:43.702 08:01:49 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:43.702 08:01:49 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:15:43.702 [2024-07-13 08:01:49.392107] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:15:43.702 [2024-07-13 08:01:49.392270] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68851 ] 00:15:43.961 [2024-07-13 08:01:49.519578] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.961 [2024-07-13 08:01:49.568125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.227  Copying: 512/512 [B] (average 250 kBps) 00:15:44.227 00:15:44.227 ************************************ 00:15:44.227 END TEST dd_flags_misc 00:15:44.227 ************************************ 00:15:44.227 08:01:49 -- dd/posix.sh@93 -- # [[ 0n7xsoxj3x7jkdvc6ptv5mn684kzh3z6jhmzl7c9u7zge8ycaukrvv0i4cu0346smeucuohb4tfbny1nel8s2ulknvhfukyeyqzx34dufpah74paohxpv3jcs7ir94wfqqv3msjbles4yk5w5qchmgi2ecqw4eh28rpwzcodj9h7utfd83sqodcrt817hy6x90m3czbn48lrnmpb5gxq5gbranpjavnp9lnf1iz7gluq9ttepz2do46omv9jkw19wtxi58hzw45rf4dev2532msyp3ymj3edonmnvsjhepodbdqbu1oc4f6ojzoga5c2ilj0v1k2mn8w3gbg6kr1nrx9ap0w4a6w4rww2nnl6bnmh48abybmr95s5xc6k7t1ydgaq46l1gt53qsewr532apnyzhqlzomcea4uudi803tr1mlepv1hi8ezejkd0282jqdnm8k9yuna754ueje98bl31ua7nidph755ipycb3z5qglns3j8pt1k237x4z0 == \0\n\7\x\s\o\x\j\3\x\7\j\k\d\v\c\6\p\t\v\5\m\n\6\8\4\k\z\h\3\z\6\j\h\m\z\l\7\c\9\u\7\z\g\e\8\y\c\a\u\k\r\v\v\0\i\4\c\u\0\3\4\6\s\m\e\u\c\u\o\h\b\4\t\f\b\n\y\1\n\e\l\8\s\2\u\l\k\n\v\h\f\u\k\y\e\y\q\z\x\3\4\d\u\f\p\a\h\7\4\p\a\o\h\x\p\v\3\j\c\s\7\i\r\9\4\w\f\q\q\v\3\m\s\j\b\l\e\s\4\y\k\5\w\5\q\c\h\m\g\i\2\e\c\q\w\4\e\h\2\8\r\p\w\z\c\o\d\j\9\h\7\u\t\f\d\8\3\s\q\o\d\c\r\t\8\1\7\h\y\6\x\9\0\m\3\c\z\b\n\4\8\l\r\n\m\p\b\5\g\x\q\5\g\b\r\a\n\p\j\a\v\n\p\9\l\n\f\1\i\z\7\g\l\u\q\9\t\t\e\p\z\2\d\o\4\6\o\m\v\9\j\k\w\1\9\w\t\x\i\5\8\h\z\w\4\5\r\f\4\d\e\v\2\5\3\2\m\s\y\p\3\y\m\j\3\e\d\o\n\m\n\v\s\j\h\e\p\o\d\b\d\q\b\u\1\o\c\4\f\6\o\j\z\o\g\a\5\c\2\i\l\j\0\v\1\k\2\m\n\8\w\3\g\b\g\6\k\r\1\n\r\x\9\a\p\0\w\4\a\6\w\4\r\w\w\2\n\n\l\6\b\n\m\h\4\8\a\b\y\b\m\r\9\5\s\5\x\c\6\k\7\t\1\y\d\g\a\q\4\6\l\1\g\t\5\3\q\s\e\w\r\5\3\2\a\p\n\y\z\h\q\l\z\o\m\c\e\a\4\u\u\d\i\8\0\3\t\r\1\m\l\e\p\v\1\h\i\8\e\z\e\j\k\d\0\2\8\2\j\q\d\n\m\8\k\9\y\u\n\a\7\5\4\u\e\j\e\9\8\b\l\3\1\u\a\7\n\i\d\p\h\7\5\5\i\p\y\c\b\3\z\5\q\g\l\n\s\3\j\8\p\t\1\k\2\3\7\x\4\z\0 ]] 00:15:44.227 00:15:44.227 real 0m4.794s 00:15:44.227 user 0m1.821s 00:15:44.227 sys 0m1.344s 00:15:44.227 08:01:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:44.227 08:01:49 -- common/autotest_common.sh@10 -- # set +x 00:15:44.227 08:01:49 -- dd/posix.sh@131 -- # tests_forced_aio 00:15:44.227 08:01:49 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:15:44.227 * Second test run, using AIO 00:15:44.227 08:01:49 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:15:44.227 08:01:49 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:15:44.227 08:01:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:44.227 08:01:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:44.227 08:01:49 -- common/autotest_common.sh@10 -- # set +x 00:15:44.227 ************************************ 00:15:44.227 START TEST dd_flag_append_forced_aio 00:15:44.227 ************************************ 00:15:44.227 08:01:49 -- common/autotest_common.sh@1104 -- # append 00:15:44.227 08:01:49 -- dd/posix.sh@16 -- # local dump0 00:15:44.227 08:01:49 -- dd/posix.sh@17 -- # local dump1 00:15:44.227 08:01:49 -- dd/posix.sh@19 -- # gen_bytes 32 00:15:44.227 08:01:49 -- dd/common.sh@98 -- # xtrace_disable 
00:15:44.227 08:01:49 -- common/autotest_common.sh@10 -- # set +x 00:15:44.227 08:01:49 -- dd/posix.sh@19 -- # dump0=slplc4qzq735pvfj3cb16sohdy0ov4bx 00:15:44.227 08:01:49 -- dd/posix.sh@20 -- # gen_bytes 32 00:15:44.227 08:01:49 -- dd/common.sh@98 -- # xtrace_disable 00:15:44.227 08:01:49 -- common/autotest_common.sh@10 -- # set +x 00:15:44.227 08:01:49 -- dd/posix.sh@20 -- # dump1=62dq9980t7ikxpdhe3ina4g62r4uaefc 00:15:44.227 08:01:49 -- dd/posix.sh@22 -- # printf %s slplc4qzq735pvfj3cb16sohdy0ov4bx 00:15:44.227 08:01:49 -- dd/posix.sh@23 -- # printf %s 62dq9980t7ikxpdhe3ina4g62r4uaefc 00:15:44.227 08:01:49 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:15:44.505 [2024-07-13 08:01:50.038890] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:44.505 [2024-07-13 08:01:50.039062] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68877 ] 00:15:44.505 [2024-07-13 08:01:50.168306] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.505 [2024-07-13 08:01:50.211826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.764  Copying: 32/32 [B] (average 31 kBps) 00:15:44.764 00:15:44.764 08:01:50 -- dd/posix.sh@27 -- # [[ 62dq9980t7ikxpdhe3ina4g62r4uaefcslplc4qzq735pvfj3cb16sohdy0ov4bx == \6\2\d\q\9\9\8\0\t\7\i\k\x\p\d\h\e\3\i\n\a\4\g\6\2\r\4\u\a\e\f\c\s\l\p\l\c\4\q\z\q\7\3\5\p\v\f\j\3\c\b\1\6\s\o\h\d\y\0\o\v\4\b\x ]] 00:15:44.764 00:15:44.764 real 0m0.585s 00:15:44.764 user 0m0.207s 00:15:44.764 sys 0m0.176s 00:15:44.764 08:01:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:44.764 08:01:50 -- common/autotest_common.sh@10 -- # set +x 00:15:44.764 ************************************ 00:15:44.764 END TEST dd_flag_append_forced_aio 00:15:44.764 ************************************ 00:15:44.764 08:01:50 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:15:44.764 08:01:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:44.764 08:01:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:44.764 08:01:50 -- common/autotest_common.sh@10 -- # set +x 00:15:44.764 ************************************ 00:15:44.764 START TEST dd_flag_directory_forced_aio 00:15:44.764 ************************************ 00:15:44.764 08:01:50 -- common/autotest_common.sh@1104 -- # directory 00:15:44.764 08:01:50 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:44.764 08:01:50 -- common/autotest_common.sh@640 -- # local es=0 00:15:44.764 08:01:50 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:44.764 08:01:50 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:44.764 08:01:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:44.764 08:01:50 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:44.764 08:01:50 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:44.764 08:01:50 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:44.764 08:01:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:44.764 08:01:50 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:44.764 08:01:50 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:44.765 08:01:50 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:45.023 [2024-07-13 08:01:50.675232] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:45.023 [2024-07-13 08:01:50.675406] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68918 ] 00:15:45.023 [2024-07-13 08:01:50.805765] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.281 [2024-07-13 08:01:50.853606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.281 [2024-07-13 08:01:50.931716] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:15:45.281 [2024-07-13 08:01:50.931785] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:15:45.281 [2024-07-13 08:01:50.931810] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:45.281 [2024-07-13 08:01:51.033592] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:15:45.540 08:01:51 -- common/autotest_common.sh@643 -- # es=236 00:15:45.540 08:01:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:45.540 08:01:51 -- common/autotest_common.sh@652 -- # es=108 00:15:45.540 08:01:51 -- common/autotest_common.sh@653 -- # case "$es" in 00:15:45.540 08:01:51 -- common/autotest_common.sh@660 -- # es=1 00:15:45.540 08:01:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:45.540 08:01:51 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:15:45.541 08:01:51 -- common/autotest_common.sh@640 -- # local es=0 00:15:45.541 08:01:51 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:15:45.541 08:01:51 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:45.541 08:01:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:45.541 08:01:51 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:45.541 08:01:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:45.541 08:01:51 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:45.541 08:01:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:45.541 08:01:51 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
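Both directory-flag runs in this test are expected to fail: --iflag=directory (and its --oflag twin) makes spdk_dd open the file with O_DIRECTORY, and doing that to a regular dump file fails with ENOTDIR, the "Not a directory" errors in the trace. The NOT wrapper turns the expected failure into a pass by inverting the exit status; the es=236 / es=108 / es=1 lines below are the suite's real helper also remapping statuses above 128 before deciding. A minimal sketch that keeps only the inversion:

    # Pass only when the wrapped command fails (exit status inverted).
    # The suite's real helper in autotest_common.sh also remaps statuses > 128.
    NOT() {
        if "$@"; then
            return 1    # wrapped command unexpectedly succeeded
        fi
        return 0        # wrapped command failed, as the test expects
    }
    NOT some_cmd --iflag=directory    # passes because some_cmd fails

The nofollow test two sections below leans on the same wrapper: opening dd.dump0.link with O_NOFOLLOW fails with ELOOP, hence the "Too many levels of symbolic links" errors there.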
00:15:45.541 08:01:51 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:45.541 08:01:51 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:15:45.541 [2024-07-13 08:01:51.259210] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:45.541 [2024-07-13 08:01:51.259398] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68932 ] 00:15:45.799 [2024-07-13 08:01:51.390260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.799 [2024-07-13 08:01:51.437861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.800 [2024-07-13 08:01:51.516382] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:15:45.800 [2024-07-13 08:01:51.516450] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:15:45.800 [2024-07-13 08:01:51.516662] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:46.058 [2024-07-13 08:01:51.619848] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:15:46.059 08:01:51 -- common/autotest_common.sh@643 -- # es=236 00:15:46.059 08:01:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:46.059 08:01:51 -- common/autotest_common.sh@652 -- # es=108 00:15:46.059 08:01:51 -- common/autotest_common.sh@653 -- # case "$es" in 00:15:46.059 08:01:51 -- common/autotest_common.sh@660 -- # es=1 00:15:46.059 08:01:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:46.059 00:15:46.059 real 0m1.171s 00:15:46.059 user 0m0.457s 00:15:46.059 sys 0m0.321s 00:15:46.059 08:01:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:46.059 ************************************ 00:15:46.059 END TEST dd_flag_directory_forced_aio 00:15:46.059 ************************************ 00:15:46.059 08:01:51 -- common/autotest_common.sh@10 -- # set +x 00:15:46.059 08:01:51 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:15:46.059 08:01:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:46.059 08:01:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:46.059 08:01:51 -- common/autotest_common.sh@10 -- # set +x 00:15:46.059 ************************************ 00:15:46.059 START TEST dd_flag_nofollow_forced_aio 00:15:46.059 ************************************ 00:15:46.059 08:01:51 -- common/autotest_common.sh@1104 -- # nofollow 00:15:46.059 08:01:51 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:15:46.059 08:01:51 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:15:46.059 08:01:51 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:15:46.059 08:01:51 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:15:46.059 08:01:51 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:46.059 08:01:51 -- common/autotest_common.sh@640 -- # local es=0 00:15:46.059 08:01:51 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:46.059 08:01:51 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:46.059 08:01:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:46.059 08:01:51 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:46.059 08:01:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:46.059 08:01:51 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:46.059 08:01:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:46.059 08:01:51 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:46.059 08:01:51 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:46.059 08:01:51 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:46.318 [2024-07-13 08:01:51.913784] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:46.318 [2024-07-13 08:01:51.914037] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68965 ] 00:15:46.318 [2024-07-13 08:01:52.064735] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.318 [2024-07-13 08:01:52.120210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.576 [2024-07-13 08:01:52.205240] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:15:46.576 [2024-07-13 08:01:52.205318] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:15:46.576 [2024-07-13 08:01:52.205347] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:46.576 [2024-07-13 08:01:52.308908] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:15:46.833 08:01:52 -- common/autotest_common.sh@643 -- # es=216 00:15:46.833 08:01:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:46.833 08:01:52 -- common/autotest_common.sh@652 -- # es=88 00:15:46.833 08:01:52 -- common/autotest_common.sh@653 -- # case "$es" in 00:15:46.833 08:01:52 -- common/autotest_common.sh@660 -- # es=1 00:15:46.833 08:01:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:46.833 08:01:52 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:15:46.833 08:01:52 -- common/autotest_common.sh@640 -- # local es=0 00:15:46.833 08:01:52 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:15:46.833 08:01:52 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:46.833 08:01:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:46.833 08:01:52 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:46.833 08:01:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:46.833 08:01:52 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:46.833 08:01:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:46.833 08:01:52 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:46.833 08:01:52 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:46.833 08:01:52 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:15:46.833 [2024-07-13 08:01:52.534401] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:46.833 [2024-07-13 08:01:52.534591] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68979 ] 00:15:47.090 [2024-07-13 08:01:52.668663] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.090 [2024-07-13 08:01:52.717585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.090 [2024-07-13 08:01:52.796154] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:15:47.090 [2024-07-13 08:01:52.796222] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:15:47.090 [2024-07-13 08:01:52.796249] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:47.090 [2024-07-13 08:01:52.900097] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:15:47.349 08:01:52 -- common/autotest_common.sh@643 -- # es=216 00:15:47.349 08:01:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:47.349 08:01:52 -- common/autotest_common.sh@652 -- # es=88 00:15:47.349 08:01:52 -- common/autotest_common.sh@653 -- # case "$es" in 00:15:47.349 08:01:52 -- common/autotest_common.sh@660 -- # es=1 00:15:47.349 08:01:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:47.349 08:01:52 -- dd/posix.sh@46 -- # gen_bytes 512 00:15:47.349 08:01:52 -- dd/common.sh@98 -- # xtrace_disable 00:15:47.349 08:01:52 -- common/autotest_common.sh@10 -- # set +x 00:15:47.349 08:01:53 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:47.349 [2024-07-13 08:01:53.137852] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:15:47.349 [2024-07-13 08:01:53.138036] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68989 ] 00:15:47.608 [2024-07-13 08:01:53.271613] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.608 [2024-07-13 08:01:53.319928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.867  Copying: 512/512 [B] (average 500 kBps) 00:15:47.867 00:15:47.867 ************************************ 00:15:47.867 END TEST dd_flag_nofollow_forced_aio 00:15:47.867 ************************************ 00:15:47.868 08:01:53 -- dd/posix.sh@49 -- # [[ gps55p0wn6ing58c5z5pc1f6qhmlrag7ozu5j100cub8yxc8jw3bjxk8m8tjy8p51e8mbllhd6pgovp2eidr6f8f3x2jveckvicrcxc735h3q4588ugi5154w7qxiqrrqv8slo9yqrjzx49ldcz26ppwnrs0eqjcecdqaecieckzln0hpg5a0ad90zabqts1vvd53ltdocc0val0qcz4w8ceub0sqit2gl2n41n7jrqoujn2w2j96rehd1um7w09vj6cqhqavatkdmw2dfvh83mu0wui5yyalj3ythfra6v82w8smplu0k6vwm21us6sqc7rs7ffz10auhstov2wiha7fs4tkwcyfj4j9njtrk84gqpvg8vlg96il345hke52iyokt5vrx3qig7n2ky514prsdvqa4xjd582lufsni4axqqdg8ajoaub063k9bjftw05l9tlo6c0h7ts60fgsze46lbwyik0qn596ls7emqru742lfr0xvcptlc0bwho == \g\p\s\5\5\p\0\w\n\6\i\n\g\5\8\c\5\z\5\p\c\1\f\6\q\h\m\l\r\a\g\7\o\z\u\5\j\1\0\0\c\u\b\8\y\x\c\8\j\w\3\b\j\x\k\8\m\8\t\j\y\8\p\5\1\e\8\m\b\l\l\h\d\6\p\g\o\v\p\2\e\i\d\r\6\f\8\f\3\x\2\j\v\e\c\k\v\i\c\r\c\x\c\7\3\5\h\3\q\4\5\8\8\u\g\i\5\1\5\4\w\7\q\x\i\q\r\r\q\v\8\s\l\o\9\y\q\r\j\z\x\4\9\l\d\c\z\2\6\p\p\w\n\r\s\0\e\q\j\c\e\c\d\q\a\e\c\i\e\c\k\z\l\n\0\h\p\g\5\a\0\a\d\9\0\z\a\b\q\t\s\1\v\v\d\5\3\l\t\d\o\c\c\0\v\a\l\0\q\c\z\4\w\8\c\e\u\b\0\s\q\i\t\2\g\l\2\n\4\1\n\7\j\r\q\o\u\j\n\2\w\2\j\9\6\r\e\h\d\1\u\m\7\w\0\9\v\j\6\c\q\h\q\a\v\a\t\k\d\m\w\2\d\f\v\h\8\3\m\u\0\w\u\i\5\y\y\a\l\j\3\y\t\h\f\r\a\6\v\8\2\w\8\s\m\p\l\u\0\k\6\v\w\m\2\1\u\s\6\s\q\c\7\r\s\7\f\f\z\1\0\a\u\h\s\t\o\v\2\w\i\h\a\7\f\s\4\t\k\w\c\y\f\j\4\j\9\n\j\t\r\k\8\4\g\q\p\v\g\8\v\l\g\9\6\i\l\3\4\5\h\k\e\5\2\i\y\o\k\t\5\v\r\x\3\q\i\g\7\n\2\k\y\5\1\4\p\r\s\d\v\q\a\4\x\j\d\5\8\2\l\u\f\s\n\i\4\a\x\q\q\d\g\8\a\j\o\a\u\b\0\6\3\k\9\b\j\f\t\w\0\5\l\9\t\l\o\6\c\0\h\7\t\s\6\0\f\g\s\z\e\4\6\l\b\w\y\i\k\0\q\n\5\9\6\l\s\7\e\m\q\r\u\7\4\2\l\f\r\0\x\v\c\p\t\l\c\0\b\w\h\o ]] 00:15:47.868 00:15:47.868 real 0m1.838s 00:15:47.868 user 0m0.694s 00:15:47.868 sys 0m0.548s 00:15:47.868 08:01:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:47.868 08:01:53 -- common/autotest_common.sh@10 -- # set +x 00:15:47.868 08:01:53 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:15:47.868 08:01:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:47.868 08:01:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:47.868 08:01:53 -- common/autotest_common.sh@10 -- # set +x 00:15:47.868 ************************************ 00:15:47.868 START TEST dd_flag_noatime_forced_aio 00:15:47.868 ************************************ 00:15:47.868 08:01:53 -- common/autotest_common.sh@1104 -- # noatime 00:15:47.868 08:01:53 -- dd/posix.sh@53 -- # local atime_if 00:15:47.868 08:01:53 -- dd/posix.sh@54 -- # local atime_of 00:15:47.868 08:01:53 -- dd/posix.sh@58 -- # gen_bytes 512 00:15:47.868 08:01:53 -- dd/common.sh@98 -- # xtrace_disable 00:15:47.868 08:01:53 -- common/autotest_common.sh@10 -- # set +x 00:15:47.868 08:01:53 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:47.868 08:01:53 -- dd/posix.sh@60 -- # atime_if=1720857713 
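The dd_flag_noatime_forced_aio run beginning here pins the input file's access time with stat --printf=%X, sleeps one second, copies with --iflag=noatime, and asserts the atime has not moved; the closing (( atime_if < ... )) check then copies without the flag and requires the atime to have advanced. The same assertion with GNU coreutils, as a minimal sketch (O_NOATIME needs file ownership or CAP_FOWNER, and a noatime/relatime mount can mask the control copy):

    # noatime check: reading with O_NOATIME must leave st_atime untouched.
    atime_before=$(stat --printf=%X in.bin)
    sleep 1
    dd if=in.bin iflag=noatime of=/dev/null status=none
    (( $(stat --printf=%X in.bin) == atime_before )) && echo 'noatime OK'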
00:15:47.868 08:01:53 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:48.127 08:01:53 -- dd/posix.sh@61 -- # atime_of=1720857713 00:15:48.127 08:01:53 -- dd/posix.sh@66 -- # sleep 1 00:15:49.065 08:01:54 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:49.065 [2024-07-13 08:01:54.815946] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:49.065 [2024-07-13 08:01:54.816124] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69046 ] 00:15:49.324 [2024-07-13 08:01:54.950168] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.324 [2024-07-13 08:01:54.999124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.583  Copying: 512/512 [B] (average 500 kBps) 00:15:49.583 00:15:49.583 08:01:55 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:49.583 08:01:55 -- dd/posix.sh@69 -- # (( atime_if == 1720857713 )) 00:15:49.583 08:01:55 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:49.583 08:01:55 -- dd/posix.sh@70 -- # (( atime_of == 1720857713 )) 00:15:49.583 08:01:55 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:49.842 [2024-07-13 08:01:55.426923] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:15:49.842 [2024-07-13 08:01:55.427099] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69065 ] 00:15:49.842 [2024-07-13 08:01:55.557413] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.842 [2024-07-13 08:01:55.606974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.101  Copying: 512/512 [B] (average 500 kBps) 00:15:50.101 00:15:50.101 08:01:55 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:50.101 ************************************ 00:15:50.101 END TEST dd_flag_noatime_forced_aio 00:15:50.102 ************************************ 00:15:50.102 08:01:55 -- dd/posix.sh@73 -- # (( atime_if < 1720857715 )) 00:15:50.102 00:15:50.102 real 0m2.230s 00:15:50.102 user 0m0.468s 00:15:50.102 sys 0m0.358s 00:15:50.102 08:01:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:50.102 08:01:55 -- common/autotest_common.sh@10 -- # set +x 00:15:50.361 08:01:55 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:15:50.361 08:01:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:50.361 08:01:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:50.361 08:01:55 -- common/autotest_common.sh@10 -- # set +x 00:15:50.361 ************************************ 00:15:50.361 START TEST dd_flags_misc_forced_aio 00:15:50.361 ************************************ 00:15:50.361 08:01:55 -- common/autotest_common.sh@1104 -- # io 00:15:50.361 08:01:55 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:15:50.361 08:01:55 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:15:50.361 08:01:55 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:15:50.361 08:01:55 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:15:50.361 08:01:55 -- dd/posix.sh@86 -- # gen_bytes 512 00:15:50.361 08:01:55 -- dd/common.sh@98 -- # xtrace_disable 00:15:50.361 08:01:55 -- common/autotest_common.sh@10 -- # set +x 00:15:50.361 08:01:55 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:50.361 08:01:55 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:15:50.361 [2024-07-13 08:01:56.083066] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:15:50.361 [2024-07-13 08:01:56.083277] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69089 ] 00:15:50.621 [2024-07-13 08:01:56.220448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.621 [2024-07-13 08:01:56.268974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.885  Copying: 512/512 [B] (average 500 kBps) 00:15:50.885 00:15:50.885 08:01:56 -- dd/posix.sh@93 -- # [[ cknc1yxkuuhwstne6d4lzb2zcehrf19ne26r48knim7ed2813s9o7wu6aplnxbp47zmi82gy9g45255rdgua2e1dgau3sozymepqhqwl6vzc1wjqu27mjwdf1qahz4zbfll7iyp3puakg5cxf5lfr238cuoqhebfldzihe7wdl7m8l99n8pkfox9czrlmosi4zw2mhmhmpb3rafdoepkoam3h4ps7lb2cpxya3s1f5dsrpo06xuvfh8923et5d13ai42pelzvdk6fdsdxfdcwsvxw4gr91qtktep355uuasip8jqnqq4qjacvr6y2l9k0u4e6nakm4r0tgrwj6zp8fzqolwf6ry31oezfx0jy3q9mnz88swy49lxqr2cziun0jhzuiaafgfjwnnpzk9aie5d5xsoh7yjwa5fdcqm6ayqm8cbyfpb1yyewpe1223jzukudchhczukmnvcxlecfbmjs36ysotcd459j4pxtblxl80letg67gh0yrqgvcqp == \c\k\n\c\1\y\x\k\u\u\h\w\s\t\n\e\6\d\4\l\z\b\2\z\c\e\h\r\f\1\9\n\e\2\6\r\4\8\k\n\i\m\7\e\d\2\8\1\3\s\9\o\7\w\u\6\a\p\l\n\x\b\p\4\7\z\m\i\8\2\g\y\9\g\4\5\2\5\5\r\d\g\u\a\2\e\1\d\g\a\u\3\s\o\z\y\m\e\p\q\h\q\w\l\6\v\z\c\1\w\j\q\u\2\7\m\j\w\d\f\1\q\a\h\z\4\z\b\f\l\l\7\i\y\p\3\p\u\a\k\g\5\c\x\f\5\l\f\r\2\3\8\c\u\o\q\h\e\b\f\l\d\z\i\h\e\7\w\d\l\7\m\8\l\9\9\n\8\p\k\f\o\x\9\c\z\r\l\m\o\s\i\4\z\w\2\m\h\m\h\m\p\b\3\r\a\f\d\o\e\p\k\o\a\m\3\h\4\p\s\7\l\b\2\c\p\x\y\a\3\s\1\f\5\d\s\r\p\o\0\6\x\u\v\f\h\8\9\2\3\e\t\5\d\1\3\a\i\4\2\p\e\l\z\v\d\k\6\f\d\s\d\x\f\d\c\w\s\v\x\w\4\g\r\9\1\q\t\k\t\e\p\3\5\5\u\u\a\s\i\p\8\j\q\n\q\q\4\q\j\a\c\v\r\6\y\2\l\9\k\0\u\4\e\6\n\a\k\m\4\r\0\t\g\r\w\j\6\z\p\8\f\z\q\o\l\w\f\6\r\y\3\1\o\e\z\f\x\0\j\y\3\q\9\m\n\z\8\8\s\w\y\4\9\l\x\q\r\2\c\z\i\u\n\0\j\h\z\u\i\a\a\f\g\f\j\w\n\n\p\z\k\9\a\i\e\5\d\5\x\s\o\h\7\y\j\w\a\5\f\d\c\q\m\6\a\y\q\m\8\c\b\y\f\p\b\1\y\y\e\w\p\e\1\2\2\3\j\z\u\k\u\d\c\h\h\c\z\u\k\m\n\v\c\x\l\e\c\f\b\m\j\s\3\6\y\s\o\t\c\d\4\5\9\j\4\p\x\t\b\l\x\l\8\0\l\e\t\g\6\7\g\h\0\y\r\q\g\v\c\q\p ]] 00:15:50.885 08:01:56 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:50.885 08:01:56 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:15:50.885 [2024-07-13 08:01:56.683006] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:15:50.885 [2024-07-13 08:01:56.683180] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69110 ] 00:15:51.147 [2024-07-13 08:01:56.811000] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.147 [2024-07-13 08:01:56.860079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.406  Copying: 512/512 [B] (average 500 kBps) 00:15:51.406 00:15:51.406 08:01:57 -- dd/posix.sh@93 -- # [[ cknc1yxkuuhwstne6d4lzb2zcehrf19ne26r48knim7ed2813s9o7wu6aplnxbp47zmi82gy9g45255rdgua2e1dgau3sozymepqhqwl6vzc1wjqu27mjwdf1qahz4zbfll7iyp3puakg5cxf5lfr238cuoqhebfldzihe7wdl7m8l99n8pkfox9czrlmosi4zw2mhmhmpb3rafdoepkoam3h4ps7lb2cpxya3s1f5dsrpo06xuvfh8923et5d13ai42pelzvdk6fdsdxfdcwsvxw4gr91qtktep355uuasip8jqnqq4qjacvr6y2l9k0u4e6nakm4r0tgrwj6zp8fzqolwf6ry31oezfx0jy3q9mnz88swy49lxqr2cziun0jhzuiaafgfjwnnpzk9aie5d5xsoh7yjwa5fdcqm6ayqm8cbyfpb1yyewpe1223jzukudchhczukmnvcxlecfbmjs36ysotcd459j4pxtblxl80letg67gh0yrqgvcqp == \c\k\n\c\1\y\x\k\u\u\h\w\s\t\n\e\6\d\4\l\z\b\2\z\c\e\h\r\f\1\9\n\e\2\6\r\4\8\k\n\i\m\7\e\d\2\8\1\3\s\9\o\7\w\u\6\a\p\l\n\x\b\p\4\7\z\m\i\8\2\g\y\9\g\4\5\2\5\5\r\d\g\u\a\2\e\1\d\g\a\u\3\s\o\z\y\m\e\p\q\h\q\w\l\6\v\z\c\1\w\j\q\u\2\7\m\j\w\d\f\1\q\a\h\z\4\z\b\f\l\l\7\i\y\p\3\p\u\a\k\g\5\c\x\f\5\l\f\r\2\3\8\c\u\o\q\h\e\b\f\l\d\z\i\h\e\7\w\d\l\7\m\8\l\9\9\n\8\p\k\f\o\x\9\c\z\r\l\m\o\s\i\4\z\w\2\m\h\m\h\m\p\b\3\r\a\f\d\o\e\p\k\o\a\m\3\h\4\p\s\7\l\b\2\c\p\x\y\a\3\s\1\f\5\d\s\r\p\o\0\6\x\u\v\f\h\8\9\2\3\e\t\5\d\1\3\a\i\4\2\p\e\l\z\v\d\k\6\f\d\s\d\x\f\d\c\w\s\v\x\w\4\g\r\9\1\q\t\k\t\e\p\3\5\5\u\u\a\s\i\p\8\j\q\n\q\q\4\q\j\a\c\v\r\6\y\2\l\9\k\0\u\4\e\6\n\a\k\m\4\r\0\t\g\r\w\j\6\z\p\8\f\z\q\o\l\w\f\6\r\y\3\1\o\e\z\f\x\0\j\y\3\q\9\m\n\z\8\8\s\w\y\4\9\l\x\q\r\2\c\z\i\u\n\0\j\h\z\u\i\a\a\f\g\f\j\w\n\n\p\z\k\9\a\i\e\5\d\5\x\s\o\h\7\y\j\w\a\5\f\d\c\q\m\6\a\y\q\m\8\c\b\y\f\p\b\1\y\y\e\w\p\e\1\2\2\3\j\z\u\k\u\d\c\h\h\c\z\u\k\m\n\v\c\x\l\e\c\f\b\m\j\s\3\6\y\s\o\t\c\d\4\5\9\j\4\p\x\t\b\l\x\l\8\0\l\e\t\g\6\7\g\h\0\y\r\q\g\v\c\q\p ]] 00:15:51.406 08:01:57 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:51.406 08:01:57 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:15:51.665 [2024-07-13 08:01:57.276940] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:15:51.665 [2024-07-13 08:01:57.277169] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69115 ] 00:15:51.665 [2024-07-13 08:01:57.422389] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.665 [2024-07-13 08:01:57.471476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.182  Copying: 512/512 [B] (average 125 kBps) 00:15:52.182 00:15:52.182 08:01:57 -- dd/posix.sh@93 -- # [[ cknc1yxkuuhwstne6d4lzb2zcehrf19ne26r48knim7ed2813s9o7wu6aplnxbp47zmi82gy9g45255rdgua2e1dgau3sozymepqhqwl6vzc1wjqu27mjwdf1qahz4zbfll7iyp3puakg5cxf5lfr238cuoqhebfldzihe7wdl7m8l99n8pkfox9czrlmosi4zw2mhmhmpb3rafdoepkoam3h4ps7lb2cpxya3s1f5dsrpo06xuvfh8923et5d13ai42pelzvdk6fdsdxfdcwsvxw4gr91qtktep355uuasip8jqnqq4qjacvr6y2l9k0u4e6nakm4r0tgrwj6zp8fzqolwf6ry31oezfx0jy3q9mnz88swy49lxqr2cziun0jhzuiaafgfjwnnpzk9aie5d5xsoh7yjwa5fdcqm6ayqm8cbyfpb1yyewpe1223jzukudchhczukmnvcxlecfbmjs36ysotcd459j4pxtblxl80letg67gh0yrqgvcqp == \c\k\n\c\1\y\x\k\u\u\h\w\s\t\n\e\6\d\4\l\z\b\2\z\c\e\h\r\f\1\9\n\e\2\6\r\4\8\k\n\i\m\7\e\d\2\8\1\3\s\9\o\7\w\u\6\a\p\l\n\x\b\p\4\7\z\m\i\8\2\g\y\9\g\4\5\2\5\5\r\d\g\u\a\2\e\1\d\g\a\u\3\s\o\z\y\m\e\p\q\h\q\w\l\6\v\z\c\1\w\j\q\u\2\7\m\j\w\d\f\1\q\a\h\z\4\z\b\f\l\l\7\i\y\p\3\p\u\a\k\g\5\c\x\f\5\l\f\r\2\3\8\c\u\o\q\h\e\b\f\l\d\z\i\h\e\7\w\d\l\7\m\8\l\9\9\n\8\p\k\f\o\x\9\c\z\r\l\m\o\s\i\4\z\w\2\m\h\m\h\m\p\b\3\r\a\f\d\o\e\p\k\o\a\m\3\h\4\p\s\7\l\b\2\c\p\x\y\a\3\s\1\f\5\d\s\r\p\o\0\6\x\u\v\f\h\8\9\2\3\e\t\5\d\1\3\a\i\4\2\p\e\l\z\v\d\k\6\f\d\s\d\x\f\d\c\w\s\v\x\w\4\g\r\9\1\q\t\k\t\e\p\3\5\5\u\u\a\s\i\p\8\j\q\n\q\q\4\q\j\a\c\v\r\6\y\2\l\9\k\0\u\4\e\6\n\a\k\m\4\r\0\t\g\r\w\j\6\z\p\8\f\z\q\o\l\w\f\6\r\y\3\1\o\e\z\f\x\0\j\y\3\q\9\m\n\z\8\8\s\w\y\4\9\l\x\q\r\2\c\z\i\u\n\0\j\h\z\u\i\a\a\f\g\f\j\w\n\n\p\z\k\9\a\i\e\5\d\5\x\s\o\h\7\y\j\w\a\5\f\d\c\q\m\6\a\y\q\m\8\c\b\y\f\p\b\1\y\y\e\w\p\e\1\2\2\3\j\z\u\k\u\d\c\h\h\c\z\u\k\m\n\v\c\x\l\e\c\f\b\m\j\s\3\6\y\s\o\t\c\d\4\5\9\j\4\p\x\t\b\l\x\l\8\0\l\e\t\g\6\7\g\h\0\y\r\q\g\v\c\q\p ]] 00:15:52.182 08:01:57 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:52.182 08:01:57 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:15:52.182 [2024-07-13 08:01:57.893297] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:15:52.182 [2024-07-13 08:01:57.893503] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69132 ] 00:15:52.441 [2024-07-13 08:01:58.023557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.441 [2024-07-13 08:01:58.072641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.700  Copying: 512/512 [B] (average 250 kBps) 00:15:52.700 00:15:52.700 08:01:58 -- dd/posix.sh@93 -- # [[ cknc1yxkuuhwstne6d4lzb2zcehrf19ne26r48knim7ed2813s9o7wu6aplnxbp47zmi82gy9g45255rdgua2e1dgau3sozymepqhqwl6vzc1wjqu27mjwdf1qahz4zbfll7iyp3puakg5cxf5lfr238cuoqhebfldzihe7wdl7m8l99n8pkfox9czrlmosi4zw2mhmhmpb3rafdoepkoam3h4ps7lb2cpxya3s1f5dsrpo06xuvfh8923et5d13ai42pelzvdk6fdsdxfdcwsvxw4gr91qtktep355uuasip8jqnqq4qjacvr6y2l9k0u4e6nakm4r0tgrwj6zp8fzqolwf6ry31oezfx0jy3q9mnz88swy49lxqr2cziun0jhzuiaafgfjwnnpzk9aie5d5xsoh7yjwa5fdcqm6ayqm8cbyfpb1yyewpe1223jzukudchhczukmnvcxlecfbmjs36ysotcd459j4pxtblxl80letg67gh0yrqgvcqp == \c\k\n\c\1\y\x\k\u\u\h\w\s\t\n\e\6\d\4\l\z\b\2\z\c\e\h\r\f\1\9\n\e\2\6\r\4\8\k\n\i\m\7\e\d\2\8\1\3\s\9\o\7\w\u\6\a\p\l\n\x\b\p\4\7\z\m\i\8\2\g\y\9\g\4\5\2\5\5\r\d\g\u\a\2\e\1\d\g\a\u\3\s\o\z\y\m\e\p\q\h\q\w\l\6\v\z\c\1\w\j\q\u\2\7\m\j\w\d\f\1\q\a\h\z\4\z\b\f\l\l\7\i\y\p\3\p\u\a\k\g\5\c\x\f\5\l\f\r\2\3\8\c\u\o\q\h\e\b\f\l\d\z\i\h\e\7\w\d\l\7\m\8\l\9\9\n\8\p\k\f\o\x\9\c\z\r\l\m\o\s\i\4\z\w\2\m\h\m\h\m\p\b\3\r\a\f\d\o\e\p\k\o\a\m\3\h\4\p\s\7\l\b\2\c\p\x\y\a\3\s\1\f\5\d\s\r\p\o\0\6\x\u\v\f\h\8\9\2\3\e\t\5\d\1\3\a\i\4\2\p\e\l\z\v\d\k\6\f\d\s\d\x\f\d\c\w\s\v\x\w\4\g\r\9\1\q\t\k\t\e\p\3\5\5\u\u\a\s\i\p\8\j\q\n\q\q\4\q\j\a\c\v\r\6\y\2\l\9\k\0\u\4\e\6\n\a\k\m\4\r\0\t\g\r\w\j\6\z\p\8\f\z\q\o\l\w\f\6\r\y\3\1\o\e\z\f\x\0\j\y\3\q\9\m\n\z\8\8\s\w\y\4\9\l\x\q\r\2\c\z\i\u\n\0\j\h\z\u\i\a\a\f\g\f\j\w\n\n\p\z\k\9\a\i\e\5\d\5\x\s\o\h\7\y\j\w\a\5\f\d\c\q\m\6\a\y\q\m\8\c\b\y\f\p\b\1\y\y\e\w\p\e\1\2\2\3\j\z\u\k\u\d\c\h\h\c\z\u\k\m\n\v\c\x\l\e\c\f\b\m\j\s\3\6\y\s\o\t\c\d\4\5\9\j\4\p\x\t\b\l\x\l\8\0\l\e\t\g\6\7\g\h\0\y\r\q\g\v\c\q\p ]] 00:15:52.700 08:01:58 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:15:52.700 08:01:58 -- dd/posix.sh@86 -- # gen_bytes 512 00:15:52.700 08:01:58 -- dd/common.sh@98 -- # xtrace_disable 00:15:52.700 08:01:58 -- common/autotest_common.sh@10 -- # set +x 00:15:52.700 08:01:58 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:52.700 08:01:58 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:15:52.700 [2024-07-13 08:01:58.500363] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:15:52.700 [2024-07-13 08:01:58.500767] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69144 ] 00:15:52.959 [2024-07-13 08:01:58.635403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.959 [2024-07-13 08:01:58.684092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.219  Copying: 512/512 [B] (average 500 kBps) 00:15:53.219 00:15:53.219 08:01:58 -- dd/posix.sh@93 -- # [[ twn84g5gzb83caq9pxo2iu6xoujgzcjoy0emaeqwchzb327qvi6c4np1m4w1byzs25mqpf2dxfa71fhp56agzy4zw5785cy3pxeul1ur4j8o7yx1csqlp34mx0vov82etwiv57s9sej4l0zihg7b1ecegbuz6lh5il3koitxftwxggdzqnesesepou5di3wfnv2o4lr29gusloxn55uciy12ea22x1jibcr1ijixg4rp9jdh8xtlizc1wp78nkaq1el6n77f7iii7vmnn865mrwqwtsmvsv3tk1lg2ulj4r4sbuk3y63s6rrs9cwnaua0dx9vbn39jhhfithutijgfd3nzxmqfo3oe9pj4npwbcl88dy465gucrvzwb08cu20v52kryayb9am8bjol0aevaf1yz5zzl44h40as2w5xbsqe1cxft1xa83608t4monkbu3liaw29ahzu0cvuqakq6jx5q8vi8v9m1074xupsk75fsgtv9i33r9wrhfbffn == \t\w\n\8\4\g\5\g\z\b\8\3\c\a\q\9\p\x\o\2\i\u\6\x\o\u\j\g\z\c\j\o\y\0\e\m\a\e\q\w\c\h\z\b\3\2\7\q\v\i\6\c\4\n\p\1\m\4\w\1\b\y\z\s\2\5\m\q\p\f\2\d\x\f\a\7\1\f\h\p\5\6\a\g\z\y\4\z\w\5\7\8\5\c\y\3\p\x\e\u\l\1\u\r\4\j\8\o\7\y\x\1\c\s\q\l\p\3\4\m\x\0\v\o\v\8\2\e\t\w\i\v\5\7\s\9\s\e\j\4\l\0\z\i\h\g\7\b\1\e\c\e\g\b\u\z\6\l\h\5\i\l\3\k\o\i\t\x\f\t\w\x\g\g\d\z\q\n\e\s\e\s\e\p\o\u\5\d\i\3\w\f\n\v\2\o\4\l\r\2\9\g\u\s\l\o\x\n\5\5\u\c\i\y\1\2\e\a\2\2\x\1\j\i\b\c\r\1\i\j\i\x\g\4\r\p\9\j\d\h\8\x\t\l\i\z\c\1\w\p\7\8\n\k\a\q\1\e\l\6\n\7\7\f\7\i\i\i\7\v\m\n\n\8\6\5\m\r\w\q\w\t\s\m\v\s\v\3\t\k\1\l\g\2\u\l\j\4\r\4\s\b\u\k\3\y\6\3\s\6\r\r\s\9\c\w\n\a\u\a\0\d\x\9\v\b\n\3\9\j\h\h\f\i\t\h\u\t\i\j\g\f\d\3\n\z\x\m\q\f\o\3\o\e\9\p\j\4\n\p\w\b\c\l\8\8\d\y\4\6\5\g\u\c\r\v\z\w\b\0\8\c\u\2\0\v\5\2\k\r\y\a\y\b\9\a\m\8\b\j\o\l\0\a\e\v\a\f\1\y\z\5\z\z\l\4\4\h\4\0\a\s\2\w\5\x\b\s\q\e\1\c\x\f\t\1\x\a\8\3\6\0\8\t\4\m\o\n\k\b\u\3\l\i\a\w\2\9\a\h\z\u\0\c\v\u\q\a\k\q\6\j\x\5\q\8\v\i\8\v\9\m\1\0\7\4\x\u\p\s\k\7\5\f\s\g\t\v\9\i\3\3\r\9\w\r\h\f\b\f\f\n ]] 00:15:53.219 08:01:58 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:53.219 08:01:58 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:15:53.477 [2024-07-13 08:01:59.094576] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:15:53.477 [2024-07-13 08:01:59.094748] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69154 ] 00:15:53.477 [2024-07-13 08:01:59.227933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.477 [2024-07-13 08:01:59.276520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.735  Copying: 512/512 [B] (average 500 kBps) 00:15:53.735 00:15:53.994 08:01:59 -- dd/posix.sh@93 -- # [[ twn84g5gzb83caq9pxo2iu6xoujgzcjoy0emaeqwchzb327qvi6c4np1m4w1byzs25mqpf2dxfa71fhp56agzy4zw5785cy3pxeul1ur4j8o7yx1csqlp34mx0vov82etwiv57s9sej4l0zihg7b1ecegbuz6lh5il3koitxftwxggdzqnesesepou5di3wfnv2o4lr29gusloxn55uciy12ea22x1jibcr1ijixg4rp9jdh8xtlizc1wp78nkaq1el6n77f7iii7vmnn865mrwqwtsmvsv3tk1lg2ulj4r4sbuk3y63s6rrs9cwnaua0dx9vbn39jhhfithutijgfd3nzxmqfo3oe9pj4npwbcl88dy465gucrvzwb08cu20v52kryayb9am8bjol0aevaf1yz5zzl44h40as2w5xbsqe1cxft1xa83608t4monkbu3liaw29ahzu0cvuqakq6jx5q8vi8v9m1074xupsk75fsgtv9i33r9wrhfbffn == \t\w\n\8\4\g\5\g\z\b\8\3\c\a\q\9\p\x\o\2\i\u\6\x\o\u\j\g\z\c\j\o\y\0\e\m\a\e\q\w\c\h\z\b\3\2\7\q\v\i\6\c\4\n\p\1\m\4\w\1\b\y\z\s\2\5\m\q\p\f\2\d\x\f\a\7\1\f\h\p\5\6\a\g\z\y\4\z\w\5\7\8\5\c\y\3\p\x\e\u\l\1\u\r\4\j\8\o\7\y\x\1\c\s\q\l\p\3\4\m\x\0\v\o\v\8\2\e\t\w\i\v\5\7\s\9\s\e\j\4\l\0\z\i\h\g\7\b\1\e\c\e\g\b\u\z\6\l\h\5\i\l\3\k\o\i\t\x\f\t\w\x\g\g\d\z\q\n\e\s\e\s\e\p\o\u\5\d\i\3\w\f\n\v\2\o\4\l\r\2\9\g\u\s\l\o\x\n\5\5\u\c\i\y\1\2\e\a\2\2\x\1\j\i\b\c\r\1\i\j\i\x\g\4\r\p\9\j\d\h\8\x\t\l\i\z\c\1\w\p\7\8\n\k\a\q\1\e\l\6\n\7\7\f\7\i\i\i\7\v\m\n\n\8\6\5\m\r\w\q\w\t\s\m\v\s\v\3\t\k\1\l\g\2\u\l\j\4\r\4\s\b\u\k\3\y\6\3\s\6\r\r\s\9\c\w\n\a\u\a\0\d\x\9\v\b\n\3\9\j\h\h\f\i\t\h\u\t\i\j\g\f\d\3\n\z\x\m\q\f\o\3\o\e\9\p\j\4\n\p\w\b\c\l\8\8\d\y\4\6\5\g\u\c\r\v\z\w\b\0\8\c\u\2\0\v\5\2\k\r\y\a\y\b\9\a\m\8\b\j\o\l\0\a\e\v\a\f\1\y\z\5\z\z\l\4\4\h\4\0\a\s\2\w\5\x\b\s\q\e\1\c\x\f\t\1\x\a\8\3\6\0\8\t\4\m\o\n\k\b\u\3\l\i\a\w\2\9\a\h\z\u\0\c\v\u\q\a\k\q\6\j\x\5\q\8\v\i\8\v\9\m\1\0\7\4\x\u\p\s\k\7\5\f\s\g\t\v\9\i\3\3\r\9\w\r\h\f\b\f\f\n ]] 00:15:53.994 08:01:59 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:53.994 08:01:59 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:15:53.994 [2024-07-13 08:01:59.688427] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:15:53.994 [2024-07-13 08:01:59.688616] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69166 ] 00:15:54.252 [2024-07-13 08:01:59.823171] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.252 [2024-07-13 08:01:59.872447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.537  Copying: 512/512 [B] (average 250 kBps) 00:15:54.537 00:15:54.537 08:02:00 -- dd/posix.sh@93 -- # [[ twn84g5gzb83caq9pxo2iu6xoujgzcjoy0emaeqwchzb327qvi6c4np1m4w1byzs25mqpf2dxfa71fhp56agzy4zw5785cy3pxeul1ur4j8o7yx1csqlp34mx0vov82etwiv57s9sej4l0zihg7b1ecegbuz6lh5il3koitxftwxggdzqnesesepou5di3wfnv2o4lr29gusloxn55uciy12ea22x1jibcr1ijixg4rp9jdh8xtlizc1wp78nkaq1el6n77f7iii7vmnn865mrwqwtsmvsv3tk1lg2ulj4r4sbuk3y63s6rrs9cwnaua0dx9vbn39jhhfithutijgfd3nzxmqfo3oe9pj4npwbcl88dy465gucrvzwb08cu20v52kryayb9am8bjol0aevaf1yz5zzl44h40as2w5xbsqe1cxft1xa83608t4monkbu3liaw29ahzu0cvuqakq6jx5q8vi8v9m1074xupsk75fsgtv9i33r9wrhfbffn == \t\w\n\8\4\g\5\g\z\b\8\3\c\a\q\9\p\x\o\2\i\u\6\x\o\u\j\g\z\c\j\o\y\0\e\m\a\e\q\w\c\h\z\b\3\2\7\q\v\i\6\c\4\n\p\1\m\4\w\1\b\y\z\s\2\5\m\q\p\f\2\d\x\f\a\7\1\f\h\p\5\6\a\g\z\y\4\z\w\5\7\8\5\c\y\3\p\x\e\u\l\1\u\r\4\j\8\o\7\y\x\1\c\s\q\l\p\3\4\m\x\0\v\o\v\8\2\e\t\w\i\v\5\7\s\9\s\e\j\4\l\0\z\i\h\g\7\b\1\e\c\e\g\b\u\z\6\l\h\5\i\l\3\k\o\i\t\x\f\t\w\x\g\g\d\z\q\n\e\s\e\s\e\p\o\u\5\d\i\3\w\f\n\v\2\o\4\l\r\2\9\g\u\s\l\o\x\n\5\5\u\c\i\y\1\2\e\a\2\2\x\1\j\i\b\c\r\1\i\j\i\x\g\4\r\p\9\j\d\h\8\x\t\l\i\z\c\1\w\p\7\8\n\k\a\q\1\e\l\6\n\7\7\f\7\i\i\i\7\v\m\n\n\8\6\5\m\r\w\q\w\t\s\m\v\s\v\3\t\k\1\l\g\2\u\l\j\4\r\4\s\b\u\k\3\y\6\3\s\6\r\r\s\9\c\w\n\a\u\a\0\d\x\9\v\b\n\3\9\j\h\h\f\i\t\h\u\t\i\j\g\f\d\3\n\z\x\m\q\f\o\3\o\e\9\p\j\4\n\p\w\b\c\l\8\8\d\y\4\6\5\g\u\c\r\v\z\w\b\0\8\c\u\2\0\v\5\2\k\r\y\a\y\b\9\a\m\8\b\j\o\l\0\a\e\v\a\f\1\y\z\5\z\z\l\4\4\h\4\0\a\s\2\w\5\x\b\s\q\e\1\c\x\f\t\1\x\a\8\3\6\0\8\t\4\m\o\n\k\b\u\3\l\i\a\w\2\9\a\h\z\u\0\c\v\u\q\a\k\q\6\j\x\5\q\8\v\i\8\v\9\m\1\0\7\4\x\u\p\s\k\7\5\f\s\g\t\v\9\i\3\3\r\9\w\r\h\f\b\f\f\n ]] 00:15:54.537 08:02:00 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:54.537 08:02:00 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:15:54.537 [2024-07-13 08:02:00.282882] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:15:54.537 [2024-07-13 08:02:00.283058] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69176 ] 00:15:54.823 [2024-07-13 08:02:00.416041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.823 [2024-07-13 08:02:00.465058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.094  Copying: 512/512 [B] (average 83 kBps) 00:15:55.094 00:15:55.095 08:02:00 -- dd/posix.sh@93 -- # [[ twn84g5gzb83caq9pxo2iu6xoujgzcjoy0emaeqwchzb327qvi6c4np1m4w1byzs25mqpf2dxfa71fhp56agzy4zw5785cy3pxeul1ur4j8o7yx1csqlp34mx0vov82etwiv57s9sej4l0zihg7b1ecegbuz6lh5il3koitxftwxggdzqnesesepou5di3wfnv2o4lr29gusloxn55uciy12ea22x1jibcr1ijixg4rp9jdh8xtlizc1wp78nkaq1el6n77f7iii7vmnn865mrwqwtsmvsv3tk1lg2ulj4r4sbuk3y63s6rrs9cwnaua0dx9vbn39jhhfithutijgfd3nzxmqfo3oe9pj4npwbcl88dy465gucrvzwb08cu20v52kryayb9am8bjol0aevaf1yz5zzl44h40as2w5xbsqe1cxft1xa83608t4monkbu3liaw29ahzu0cvuqakq6jx5q8vi8v9m1074xupsk75fsgtv9i33r9wrhfbffn == \t\w\n\8\4\g\5\g\z\b\8\3\c\a\q\9\p\x\o\2\i\u\6\x\o\u\j\g\z\c\j\o\y\0\e\m\a\e\q\w\c\h\z\b\3\2\7\q\v\i\6\c\4\n\p\1\m\4\w\1\b\y\z\s\2\5\m\q\p\f\2\d\x\f\a\7\1\f\h\p\5\6\a\g\z\y\4\z\w\5\7\8\5\c\y\3\p\x\e\u\l\1\u\r\4\j\8\o\7\y\x\1\c\s\q\l\p\3\4\m\x\0\v\o\v\8\2\e\t\w\i\v\5\7\s\9\s\e\j\4\l\0\z\i\h\g\7\b\1\e\c\e\g\b\u\z\6\l\h\5\i\l\3\k\o\i\t\x\f\t\w\x\g\g\d\z\q\n\e\s\e\s\e\p\o\u\5\d\i\3\w\f\n\v\2\o\4\l\r\2\9\g\u\s\l\o\x\n\5\5\u\c\i\y\1\2\e\a\2\2\x\1\j\i\b\c\r\1\i\j\i\x\g\4\r\p\9\j\d\h\8\x\t\l\i\z\c\1\w\p\7\8\n\k\a\q\1\e\l\6\n\7\7\f\7\i\i\i\7\v\m\n\n\8\6\5\m\r\w\q\w\t\s\m\v\s\v\3\t\k\1\l\g\2\u\l\j\4\r\4\s\b\u\k\3\y\6\3\s\6\r\r\s\9\c\w\n\a\u\a\0\d\x\9\v\b\n\3\9\j\h\h\f\i\t\h\u\t\i\j\g\f\d\3\n\z\x\m\q\f\o\3\o\e\9\p\j\4\n\p\w\b\c\l\8\8\d\y\4\6\5\g\u\c\r\v\z\w\b\0\8\c\u\2\0\v\5\2\k\r\y\a\y\b\9\a\m\8\b\j\o\l\0\a\e\v\a\f\1\y\z\5\z\z\l\4\4\h\4\0\a\s\2\w\5\x\b\s\q\e\1\c\x\f\t\1\x\a\8\3\6\0\8\t\4\m\o\n\k\b\u\3\l\i\a\w\2\9\a\h\z\u\0\c\v\u\q\a\k\q\6\j\x\5\q\8\v\i\8\v\9\m\1\0\7\4\x\u\p\s\k\7\5\f\s\g\t\v\9\i\3\3\r\9\w\r\h\f\b\f\f\n ]] 00:15:55.095 00:15:55.095 real 0m4.814s 00:15:55.095 user 0m1.853s 00:15:55.095 sys 0m1.324s 00:15:55.095 08:02:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:55.095 ************************************ 00:15:55.095 END TEST dd_flags_misc_forced_aio 00:15:55.095 ************************************ 00:15:55.095 08:02:00 -- common/autotest_common.sh@10 -- # set +x 00:15:55.095 08:02:00 -- dd/posix.sh@1 -- # cleanup 00:15:55.095 08:02:00 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:15:55.095 08:02:00 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:15:55.095 ************************************ 00:15:55.095 END TEST spdk_dd_posix 00:15:55.095 ************************************ 00:15:55.095 00:15:55.095 real 0m21.923s 00:15:55.095 user 0m7.615s 00:15:55.095 sys 0m5.817s 00:15:55.095 08:02:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:55.095 08:02:00 -- common/autotest_common.sh@10 -- # set +x 00:15:55.095 08:02:00 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:15:55.095 08:02:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:55.095 08:02:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:55.095 08:02:00 -- 
common/autotest_common.sh@10 -- # set +x 00:15:55.095 ************************************ 00:15:55.095 START TEST spdk_dd_malloc 00:15:55.095 ************************************ 00:15:55.095 08:02:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:15:55.354 * Looking for test storage... 00:15:55.354 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:15:55.354 08:02:00 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:55.354 08:02:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:55.354 08:02:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:55.354 08:02:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:55.354 08:02:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:15:55.354 08:02:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:15:55.354 08:02:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:15:55.354 08:02:00 -- paths/export.sh@5 -- # export PATH 00:15:55.354 08:02:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:15:55.354 08:02:00 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:15:55.354 08:02:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:55.354 08:02:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:55.354 08:02:00 -- common/autotest_common.sh@10 -- # set +x 00:15:55.354 ************************************ 00:15:55.354 START TEST dd_malloc_copy 00:15:55.354 ************************************ 00:15:55.354 08:02:00 -- common/autotest_common.sh@1104 -- # malloc_copy 00:15:55.354 08:02:00 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:15:55.354 08:02:00 -- 
dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:15:55.354 08:02:00 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(["name"]=$mbdev0 ["num_blocks"]=$mbdev0_b ["block_size"]=$mbdev0_bs) 00:15:55.354 08:02:00 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:15:55.354 08:02:00 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(["name"]=$mbdev1 ["num_blocks"]=$mbdev1_b ["block_size"]=$mbdev1_bs) 00:15:55.354 08:02:00 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:15:55.354 08:02:00 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:15:55.354 08:02:00 -- dd/malloc.sh@28 -- # gen_conf 00:15:55.354 08:02:00 -- dd/common.sh@31 -- # xtrace_disable 00:15:55.354 08:02:00 -- common/autotest_common.sh@10 -- # set +x 00:15:55.354 { 00:15:55.354 "subsystems": [ 00:15:55.354 { 00:15:55.354 "subsystem": "bdev", 00:15:55.354 "config": [ 00:15:55.354 { 00:15:55.354 "params": { 00:15:55.354 "block_size": 512, 00:15:55.354 "name": "malloc0", 00:15:55.354 "num_blocks": 1048576 00:15:55.354 }, 00:15:55.354 "method": "bdev_malloc_create" 00:15:55.354 }, 00:15:55.354 { 00:15:55.354 "params": { 00:15:55.354 "block_size": 512, 00:15:55.354 "name": "malloc1", 00:15:55.354 "num_blocks": 1048576 00:15:55.354 }, 00:15:55.354 "method": "bdev_malloc_create" 00:15:55.354 }, 00:15:55.354 { 00:15:55.354 "method": "bdev_wait_for_examine" 00:15:55.354 } 00:15:55.354 ] 00:15:55.354 } 00:15:55.354 ] 00:15:55.354 } 00:15:55.354 [2024-07-13 08:02:01.064743] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:55.354 [2024-07-13 08:02:01.064949] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69269 ] 00:15:55.613 [2024-07-13 08:02:01.207225] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.613 [2024-07-13 08:02:01.258504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.249  Copying: 512/512 [MB] (average 581 MBps) 00:15:57.249 00:15:57.249 08:02:02 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:15:57.249 08:02:02 -- dd/malloc.sh@33 -- # gen_conf 00:15:57.249 08:02:02 -- dd/common.sh@31 -- # xtrace_disable 00:15:57.249 08:02:02 -- common/autotest_common.sh@10 -- # set +x 00:15:57.249 { 00:15:57.249 "subsystems": [ 00:15:57.249 { 00:15:57.249 "subsystem": "bdev", 00:15:57.249 "config": [ 00:15:57.249 { 00:15:57.249 "params": { 00:15:57.249 "block_size": 512, 00:15:57.249 "name": "malloc0", 00:15:57.249 "num_blocks": 1048576 00:15:57.249 }, 00:15:57.249 "method": "bdev_malloc_create" 00:15:57.249 }, 00:15:57.249 { 00:15:57.249 "params": { 00:15:57.249 "block_size": 512, 00:15:57.249 "name": "malloc1", 00:15:57.249 "num_blocks": 1048576 00:15:57.249 }, 00:15:57.249 "method": "bdev_malloc_create" 00:15:57.249 }, 00:15:57.249 { 00:15:57.249 "method": "bdev_wait_for_examine" 00:15:57.249 } 00:15:57.249 ] 00:15:57.249 } 00:15:57.249 ] 00:15:57.249 } 00:15:57.249 [2024-07-13 08:02:03.054814] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:15:57.249 [2024-07-13 08:02:03.054991] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69302 ] 00:15:57.506 [2024-07-13 08:02:03.183434] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.506 [2024-07-13 08:02:03.233686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.140  Copying: 512/512 [MB] (average 584 MBps) 00:15:59.140 00:15:59.140 00:15:59.140 real 0m3.959s 00:15:59.140 user 0m2.831s 00:15:59.140 sys 0m0.840s 00:15:59.140 ************************************ 00:15:59.140 END TEST dd_malloc_copy 00:15:59.140 ************************************ 00:15:59.140 08:02:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:59.140 08:02:04 -- common/autotest_common.sh@10 -- # set +x 00:15:59.140 ************************************ 00:15:59.140 END TEST spdk_dd_malloc 00:15:59.140 ************************************ 00:15:59.140 00:15:59.140 real 0m4.085s 00:15:59.140 user 0m2.885s 00:15:59.140 sys 0m0.918s 00:15:59.140 08:02:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:59.140 08:02:04 -- common/autotest_common.sh@10 -- # set +x 00:15:59.398 08:02:04 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:15:59.398 08:02:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:59.398 08:02:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:59.398 08:02:04 -- common/autotest_common.sh@10 -- # set +x 00:15:59.398 ************************************ 00:15:59.398 START TEST spdk_dd_bdev_to_bdev 00:15:59.398 ************************************ 00:15:59.398 08:02:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:15:59.398 * Looking for test storage... 
00:15:59.398 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:15:59.398 08:02:05 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:59.398 08:02:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:59.398 08:02:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:59.398 08:02:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:59.398 08:02:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:15:59.398 08:02:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:15:59.398 08:02:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:15:59.398 08:02:05 -- paths/export.sh@5 -- # export PATH 00:15:59.398 08:02:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:15:59.398 08:02:05 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:15:59.398 08:02:05 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:15:59.398 08:02:05 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:15:59.398 08:02:05 -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:15:59.398 08:02:05 -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:15:59.398 08:02:05 -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:15:59.398 08:02:05 -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:06.0 00:15:59.398 08:02:05 -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:15:59.398 08:02:05 -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:15:59.398 08:02:05 -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:15:59.398 08:02:05 -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:15:59.398 08:02:05 -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(["name"]=$bdev1 
["filename"]=$aio1 ["block_size"]=4096) 00:15:59.398 08:02:05 -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:15:59.398 08:02:05 -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:15:59.398 [2024-07-13 08:02:05.206640] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:59.398 [2024-07-13 08:02:05.206826] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69405 ] 00:15:59.657 [2024-07-13 08:02:05.340057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.657 [2024-07-13 08:02:05.389688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.172  Copying: 256/256 [MB] (average 1969 MBps) 00:16:00.172 00:16:00.172 08:02:05 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:16:00.172 08:02:05 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:16:00.172 08:02:05 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:16:00.172 08:02:05 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:16:00.172 08:02:05 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:16:00.172 08:02:05 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:16:00.172 08:02:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:00.172 08:02:05 -- common/autotest_common.sh@10 -- # set +x 00:16:00.172 ************************************ 00:16:00.172 START TEST dd_inflate_file 00:16:00.172 ************************************ 00:16:00.172 08:02:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:16:00.172 [2024-07-13 08:02:05.941643] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:16:00.172 [2024-07-13 08:02:05.941816] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69428 ] 00:16:00.432 [2024-07-13 08:02:06.074771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.432 [2024-07-13 08:02:06.123683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.691  Copying: 64/64 [MB] (average 1777 MBps) 00:16:00.691 00:16:00.691 00:16:00.691 real 0m0.624s 00:16:00.691 user 0m0.221s 00:16:00.691 sys 0m0.201s 00:16:00.691 08:02:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:00.691 ************************************ 00:16:00.691 END TEST dd_inflate_file 00:16:00.691 ************************************ 00:16:00.691 08:02:06 -- common/autotest_common.sh@10 -- # set +x 00:16:00.691 08:02:06 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:16:00.691 08:02:06 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:16:00.691 08:02:06 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:16:00.691 08:02:06 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:16:00.691 08:02:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:00.691 08:02:06 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:16:00.691 08:02:06 -- common/autotest_common.sh@10 -- # set +x 00:16:00.691 08:02:06 -- dd/common.sh@31 -- # xtrace_disable 00:16:00.691 08:02:06 -- common/autotest_common.sh@10 -- # set +x 00:16:00.691 ************************************ 00:16:00.691 START TEST dd_copy_to_out_bdev 00:16:00.691 ************************************ 00:16:00.691 08:02:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:16:00.949 { 00:16:00.949 "subsystems": [ 00:16:00.949 { 00:16:00.949 "subsystem": "bdev", 00:16:00.949 "config": [ 00:16:00.949 { 00:16:00.949 "params": { 00:16:00.949 "block_size": 4096, 00:16:00.949 "name": "aio1", 00:16:00.949 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:16:00.949 }, 00:16:00.949 "method": "bdev_aio_create" 00:16:00.949 }, 00:16:00.949 { 00:16:00.949 "params": { 00:16:00.949 "trtype": "pcie", 00:16:00.949 "name": "Nvme0", 00:16:00.949 "traddr": "0000:00:06.0" 00:16:00.949 }, 00:16:00.949 "method": "bdev_nvme_attach_controller" 00:16:00.949 }, 00:16:00.949 { 00:16:00.949 "method": "bdev_wait_for_examine" 00:16:00.949 } 00:16:00.949 ] 00:16:00.949 } 00:16:00.949 ] 00:16:00.949 } 00:16:00.949 [2024-07-13 08:02:06.626835] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:16:00.949 [2024-07-13 08:02:06.627098] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69466 ] 00:16:01.207 [2024-07-13 08:02:06.761933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.207 [2024-07-13 08:02:06.812028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.403  Copying: 64/64 [MB] (average 74 MBps) 00:16:02.403 00:16:02.403 ************************************ 00:16:02.403 END TEST dd_copy_to_out_bdev 00:16:02.403 ************************************ 00:16:02.403 00:16:02.403 real 0m1.577s 00:16:02.403 user 0m1.186s 00:16:02.403 sys 0m0.252s 00:16:02.403 08:02:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:02.403 08:02:08 -- common/autotest_common.sh@10 -- # set +x 00:16:02.403 08:02:08 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:16:02.403 08:02:08 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:16:02.403 08:02:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:02.403 08:02:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:02.403 08:02:08 -- common/autotest_common.sh@10 -- # set +x 00:16:02.403 ************************************ 00:16:02.403 START TEST dd_offset_magic 00:16:02.403 ************************************ 00:16:02.403 08:02:08 -- common/autotest_common.sh@1104 -- # offset_magic 00:16:02.403 08:02:08 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:16:02.403 08:02:08 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:16:02.403 08:02:08 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:16:02.403 08:02:08 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:16:02.403 08:02:08 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:16:02.403 08:02:08 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:16:02.403 08:02:08 -- dd/common.sh@31 -- # xtrace_disable 00:16:02.403 08:02:08 -- common/autotest_common.sh@10 -- # set +x 00:16:02.403 { 00:16:02.403 "subsystems": [ 00:16:02.403 { 00:16:02.403 "subsystem": "bdev", 00:16:02.403 "config": [ 00:16:02.403 { 00:16:02.403 "params": { 00:16:02.403 "block_size": 4096, 00:16:02.403 "name": "aio1", 00:16:02.403 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:16:02.403 }, 00:16:02.403 "method": "bdev_aio_create" 00:16:02.403 }, 00:16:02.403 { 00:16:02.403 "params": { 00:16:02.403 "trtype": "pcie", 00:16:02.403 "name": "Nvme0", 00:16:02.403 "traddr": "0000:00:06.0" 00:16:02.403 }, 00:16:02.403 "method": "bdev_nvme_attach_controller" 00:16:02.403 }, 00:16:02.403 { 00:16:02.403 "method": "bdev_wait_for_examine" 00:16:02.403 } 00:16:02.403 ] 00:16:02.403 } 00:16:02.403 ] 00:16:02.403 } 00:16:02.663 [2024-07-13 08:02:08.261129] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:16:02.663 [2024-07-13 08:02:08.261323] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69512 ] 00:16:02.663 [2024-07-13 08:02:08.391134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.663 [2024-07-13 08:02:08.440477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.491  Copying: 65/65 [MB] (average 210 MBps) 00:16:03.491 00:16:03.491 08:02:09 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:16:03.491 08:02:09 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:16:03.491 08:02:09 -- dd/common.sh@31 -- # xtrace_disable 00:16:03.491 08:02:09 -- common/autotest_common.sh@10 -- # set +x 00:16:03.491 { 00:16:03.491 "subsystems": [ 00:16:03.491 { 00:16:03.491 "subsystem": "bdev", 00:16:03.491 "config": [ 00:16:03.491 { 00:16:03.491 "params": { 00:16:03.491 "block_size": 4096, 00:16:03.491 "name": "aio1", 00:16:03.491 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:16:03.491 }, 00:16:03.491 "method": "bdev_aio_create" 00:16:03.491 }, 00:16:03.491 { 00:16:03.491 "params": { 00:16:03.491 "trtype": "pcie", 00:16:03.491 "name": "Nvme0", 00:16:03.491 "traddr": "0000:00:06.0" 00:16:03.491 }, 00:16:03.491 "method": "bdev_nvme_attach_controller" 00:16:03.491 }, 00:16:03.491 { 00:16:03.491 "method": "bdev_wait_for_examine" 00:16:03.491 } 00:16:03.491 ] 00:16:03.491 } 00:16:03.491 ] 00:16:03.491 } 00:16:03.491 [2024-07-13 08:02:09.286138] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:16:03.491 [2024-07-13 08:02:09.286401] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69537 ] 00:16:03.751 [2024-07-13 08:02:09.448481] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.751 [2024-07-13 08:02:09.503701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.270  Copying: 1024/1024 [kB] (average 1000 MBps) 00:16:04.270 00:16:04.270 08:02:09 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:16:04.270 08:02:09 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:16:04.270 08:02:09 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:16:04.270 08:02:09 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:16:04.270 08:02:09 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:16:04.270 08:02:09 -- dd/common.sh@31 -- # xtrace_disable 00:16:04.270 08:02:09 -- common/autotest_common.sh@10 -- # set +x 00:16:04.270 { 00:16:04.270 "subsystems": [ 00:16:04.270 { 00:16:04.270 "subsystem": "bdev", 00:16:04.270 "config": [ 00:16:04.270 { 00:16:04.270 "params": { 00:16:04.270 "block_size": 4096, 00:16:04.270 "name": "aio1", 00:16:04.270 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:16:04.270 }, 00:16:04.270 "method": "bdev_aio_create" 00:16:04.270 }, 00:16:04.270 { 00:16:04.270 "params": { 00:16:04.270 "trtype": "pcie", 00:16:04.270 "name": "Nvme0", 00:16:04.270 "traddr": "0000:00:06.0" 00:16:04.270 }, 00:16:04.270 "method": "bdev_nvme_attach_controller" 00:16:04.270 }, 00:16:04.270 { 00:16:04.270 "method": "bdev_wait_for_examine" 00:16:04.270 } 00:16:04.270 ] 00:16:04.270 } 00:16:04.270 ] 00:16:04.270 } 00:16:04.270 [2024-07-13 08:02:10.062311] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:16:04.270 [2024-07-13 08:02:10.062880] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69555 ] 00:16:04.529 [2024-07-13 08:02:10.198839] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.529 [2024-07-13 08:02:10.248590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.355  Copying: 65/65 [MB] (average 155 MBps) 00:16:05.355 00:16:05.355 08:02:11 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:16:05.355 08:02:11 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:16:05.355 08:02:11 -- dd/common.sh@31 -- # xtrace_disable 00:16:05.355 08:02:11 -- common/autotest_common.sh@10 -- # set +x 00:16:05.355 { 00:16:05.355 "subsystems": [ 00:16:05.355 { 00:16:05.355 "subsystem": "bdev", 00:16:05.355 "config": [ 00:16:05.355 { 00:16:05.355 "params": { 00:16:05.355 "block_size": 4096, 00:16:05.355 "name": "aio1", 00:16:05.355 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:16:05.355 }, 00:16:05.355 "method": "bdev_aio_create" 00:16:05.355 }, 00:16:05.355 { 00:16:05.355 "params": { 00:16:05.355 "trtype": "pcie", 00:16:05.355 "name": "Nvme0", 00:16:05.355 "traddr": "0000:00:06.0" 00:16:05.355 }, 00:16:05.355 "method": "bdev_nvme_attach_controller" 00:16:05.355 }, 00:16:05.355 { 00:16:05.355 "method": "bdev_wait_for_examine" 00:16:05.355 } 00:16:05.355 ] 00:16:05.355 } 00:16:05.355 ] 00:16:05.355 } 00:16:05.614 [2024-07-13 08:02:11.204104] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:16:05.614 [2024-07-13 08:02:11.204374] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69577 ] 00:16:05.614 [2024-07-13 08:02:11.348925] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.614 [2024-07-13 08:02:11.406233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.132  Copying: 1024/1024 [kB] (average 1000 MBps) 00:16:06.132 00:16:06.132 08:02:11 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:16:06.132 ************************************ 00:16:06.132 END TEST dd_offset_magic 00:16:06.132 ************************************ 00:16:06.132 08:02:11 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:16:06.132 00:16:06.132 real 0m3.657s 00:16:06.132 user 0m1.696s 00:16:06.132 sys 0m0.913s 00:16:06.132 08:02:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:06.132 08:02:11 -- common/autotest_common.sh@10 -- # set +x 00:16:06.132 08:02:11 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:16:06.132 08:02:11 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:16:06.132 08:02:11 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:16:06.132 08:02:11 -- dd/common.sh@11 -- # local nvme_ref= 00:16:06.132 08:02:11 -- dd/common.sh@12 -- # local size=4194330 00:16:06.132 08:02:11 -- dd/common.sh@14 -- # local bs=1048576 00:16:06.132 08:02:11 -- dd/common.sh@15 -- # local count=5 00:16:06.132 08:02:11 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:16:06.132 08:02:11 -- dd/common.sh@18 -- # gen_conf 00:16:06.132 08:02:11 -- dd/common.sh@31 -- # xtrace_disable 00:16:06.132 08:02:11 -- common/autotest_common.sh@10 -- # set +x 00:16:06.132 { 00:16:06.132 "subsystems": [ 00:16:06.132 { 00:16:06.132 "subsystem": "bdev", 00:16:06.132 "config": [ 00:16:06.132 { 00:16:06.132 "params": { 00:16:06.132 "block_size": 4096, 00:16:06.132 "name": "aio1", 00:16:06.132 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:16:06.132 }, 00:16:06.132 "method": "bdev_aio_create" 00:16:06.132 }, 00:16:06.132 { 00:16:06.132 "params": { 00:16:06.132 "trtype": "pcie", 00:16:06.132 "name": "Nvme0", 00:16:06.132 "traddr": "0000:00:06.0" 00:16:06.132 }, 00:16:06.132 "method": "bdev_nvme_attach_controller" 00:16:06.132 }, 00:16:06.132 { 00:16:06.132 "method": "bdev_wait_for_examine" 00:16:06.132 } 00:16:06.132 ] 00:16:06.132 } 00:16:06.132 ] 00:16:06.132 } 00:16:06.392 [2024-07-13 08:02:11.959410] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:16:06.392 [2024-07-13 08:02:11.959602] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69614 ] 00:16:06.392 [2024-07-13 08:02:12.100905] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.392 [2024-07-13 08:02:12.151057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.920  Copying: 5120/5120 [kB] (average 1000 MBps) 00:16:06.920 00:16:06.920 08:02:12 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:16:06.920 08:02:12 -- dd/common.sh@10 -- # local bdev=aio1 00:16:06.920 08:02:12 -- dd/common.sh@11 -- # local nvme_ref= 00:16:06.920 08:02:12 -- dd/common.sh@12 -- # local size=4194330 00:16:06.920 08:02:12 -- dd/common.sh@14 -- # local bs=1048576 00:16:06.920 08:02:12 -- dd/common.sh@15 -- # local count=5 00:16:06.920 08:02:12 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:16:06.920 08:02:12 -- dd/common.sh@18 -- # gen_conf 00:16:06.920 08:02:12 -- dd/common.sh@31 -- # xtrace_disable 00:16:06.920 08:02:12 -- common/autotest_common.sh@10 -- # set +x 00:16:06.920 { 00:16:06.920 "subsystems": [ 00:16:06.920 { 00:16:06.920 "subsystem": "bdev", 00:16:06.920 "config": [ 00:16:06.920 { 00:16:06.920 "params": { 00:16:06.920 "block_size": 4096, 00:16:06.920 "name": "aio1", 00:16:06.920 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:16:06.920 }, 00:16:06.920 "method": "bdev_aio_create" 00:16:06.920 }, 00:16:06.920 { 00:16:06.920 "params": { 00:16:06.920 "trtype": "pcie", 00:16:06.920 "name": "Nvme0", 00:16:06.920 "traddr": "0000:00:06.0" 00:16:06.920 }, 00:16:06.920 "method": "bdev_nvme_attach_controller" 00:16:06.920 }, 00:16:06.920 { 00:16:06.920 "method": "bdev_wait_for_examine" 00:16:06.920 } 00:16:06.920 ] 00:16:06.920 } 00:16:06.920 ] 00:16:06.920 } 00:16:06.920 [2024-07-13 08:02:12.655445] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:16:06.920 [2024-07-13 08:02:12.655634] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69631 ] 00:16:07.188 [2024-07-13 08:02:12.789417] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.188 [2024-07-13 08:02:12.839472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.446  Copying: 5120/5120 [kB] (average 156 MBps) 00:16:07.446 00:16:07.446 08:02:13 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:16:07.705 ************************************ 00:16:07.705 END TEST spdk_dd_bdev_to_bdev 00:16:07.705 ************************************ 00:16:07.705 00:16:07.705 real 0m8.291s 00:16:07.705 user 0m4.138s 00:16:07.705 sys 0m2.244s 00:16:07.705 08:02:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:07.705 08:02:13 -- common/autotest_common.sh@10 -- # set +x 00:16:07.705 08:02:13 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:16:07.705 08:02:13 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:16:07.705 08:02:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:07.705 08:02:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:07.705 08:02:13 -- common/autotest_common.sh@10 -- # set +x 00:16:07.705 ************************************ 00:16:07.705 START TEST spdk_dd_sparse 00:16:07.705 ************************************ 00:16:07.705 08:02:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:16:07.705 * Looking for test storage... 
00:16:07.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:16:07.705 08:02:13 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:07.705 08:02:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:07.705 08:02:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:07.705 08:02:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:07.705 08:02:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:16:07.705 08:02:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:16:07.705 08:02:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:16:07.705 08:02:13 -- paths/export.sh@5 -- # export PATH 00:16:07.705 08:02:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:16:07.705 08:02:13 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:16:07.705 08:02:13 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:16:07.705 08:02:13 -- dd/sparse.sh@110 -- # file1=file_zero1 00:16:07.705 08:02:13 -- dd/sparse.sh@111 -- # file2=file_zero2 00:16:07.705 08:02:13 -- dd/sparse.sh@112 -- # file3=file_zero3 00:16:07.705 08:02:13 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:16:07.705 08:02:13 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:16:07.705 08:02:13 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:16:07.705 08:02:13 -- dd/sparse.sh@118 -- # prepare 00:16:07.705 08:02:13 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:16:07.705 08:02:13 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:16:07.705 1+0 records in 00:16:07.705 1+0 records out 00:16:07.705 4194304 bytes (4.2 MB) copied, 0.0065358 s, 642 MB/s 00:16:07.705 08:02:13 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M 
count=1 seek=4 00:16:07.705 1+0 records in 00:16:07.705 1+0 records out 00:16:07.705 4194304 bytes (4.2 MB) copied, 0.00330063 s, 1.3 GB/s 00:16:07.705 08:02:13 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:16:07.705 1+0 records in 00:16:07.705 1+0 records out 00:16:07.705 4194304 bytes (4.2 MB) copied, 0.00363942 s, 1.2 GB/s 00:16:07.705 08:02:13 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:16:07.705 08:02:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:07.705 08:02:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:07.705 08:02:13 -- common/autotest_common.sh@10 -- # set +x 00:16:07.705 ************************************ 00:16:07.705 START TEST dd_sparse_file_to_file 00:16:07.705 ************************************ 00:16:07.705 08:02:13 -- common/autotest_common.sh@1104 -- # file_to_file 00:16:07.705 08:02:13 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:16:07.705 08:02:13 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:16:07.705 08:02:13 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:16:07.705 08:02:13 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:16:07.705 08:02:13 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(["bdev_name"]=$aio_bdev ["lvs_name"]=$lvstore) 00:16:07.705 08:02:13 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:16:07.705 08:02:13 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:16:07.705 08:02:13 -- dd/sparse.sh@41 -- # gen_conf 00:16:07.705 08:02:13 -- dd/common.sh@31 -- # xtrace_disable 00:16:07.705 08:02:13 -- common/autotest_common.sh@10 -- # set +x 00:16:07.964 { 00:16:07.964 "subsystems": [ 00:16:07.964 { 00:16:07.964 "subsystem": "bdev", 00:16:07.964 "config": [ 00:16:07.964 { 00:16:07.964 "params": { 00:16:07.964 "block_size": 4096, 00:16:07.964 "name": "dd_aio", 00:16:07.964 "filename": "dd_sparse_aio_disk" 00:16:07.964 }, 00:16:07.964 "method": "bdev_aio_create" 00:16:07.964 }, 00:16:07.964 { 00:16:07.964 "params": { 00:16:07.964 "bdev_name": "dd_aio", 00:16:07.964 "lvs_name": "dd_lvstore" 00:16:07.964 }, 00:16:07.964 "method": "bdev_lvol_create_lvstore" 00:16:07.964 }, 00:16:07.964 { 00:16:07.964 "method": "bdev_wait_for_examine" 00:16:07.964 } 00:16:07.964 ] 00:16:07.964 } 00:16:07.964 ] 00:16:07.964 } 00:16:07.964 [2024-07-13 08:02:13.605909] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:16:07.964 [2024-07-13 08:02:13.606109] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69707 ] 00:16:07.964 [2024-07-13 08:02:13.741368] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.222 [2024-07-13 08:02:13.791520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.480  Copying: 12/36 [MB] (average 1333 MBps) 00:16:08.480 00:16:08.480 08:02:14 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:16:08.480 08:02:14 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:16:08.480 08:02:14 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:16:08.480 08:02:14 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:16:08.480 08:02:14 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:16:08.480 08:02:14 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:16:08.480 08:02:14 -- dd/sparse.sh@52 -- # stat1_b=24576 00:16:08.480 08:02:14 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:16:08.480 ************************************ 00:16:08.480 END TEST dd_sparse_file_to_file 00:16:08.480 ************************************ 00:16:08.480 08:02:14 -- dd/sparse.sh@53 -- # stat2_b=24576 00:16:08.480 08:02:14 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:16:08.480 00:16:08.480 real 0m0.689s 00:16:08.480 user 0m0.327s 00:16:08.480 sys 0m0.215s 00:16:08.480 08:02:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:08.480 08:02:14 -- common/autotest_common.sh@10 -- # set +x 00:16:08.480 08:02:14 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:16:08.480 08:02:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:08.480 08:02:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:08.480 08:02:14 -- common/autotest_common.sh@10 -- # set +x 00:16:08.480 ************************************ 00:16:08.480 START TEST dd_sparse_file_to_bdev 00:16:08.480 ************************************ 00:16:08.480 08:02:14 -- common/autotest_common.sh@1104 -- # file_to_bdev 00:16:08.480 08:02:14 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:16:08.480 08:02:14 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:16:08.480 08:02:14 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(["lvs_name"]=$lvstore ["lvol_name"]=$lvol ["size"]=37748736 ["thin_provision"]=true) 00:16:08.480 08:02:14 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:16:08.480 08:02:14 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:16:08.480 08:02:14 -- dd/sparse.sh@73 -- # gen_conf 00:16:08.480 08:02:14 -- dd/common.sh@31 -- # xtrace_disable 00:16:08.480 08:02:14 -- common/autotest_common.sh@10 -- # set +x 00:16:08.480 { 00:16:08.480 "subsystems": [ 00:16:08.480 { 00:16:08.480 "subsystem": "bdev", 00:16:08.480 "config": [ 00:16:08.480 { 00:16:08.480 "params": { 00:16:08.480 "block_size": 4096, 00:16:08.480 "name": "dd_aio", 00:16:08.480 "filename": "dd_sparse_aio_disk" 00:16:08.480 }, 00:16:08.480 "method": "bdev_aio_create" 00:16:08.480 }, 00:16:08.480 { 00:16:08.480 "params": { 00:16:08.480 "thin_provision": true, 00:16:08.480 "size": 37748736, 00:16:08.480 "lvol_name": "dd_lvol", 00:16:08.480 "lvs_name": "dd_lvstore" 00:16:08.480 }, 00:16:08.480 "method": 
"bdev_lvol_create" 00:16:08.480 }, 00:16:08.480 { 00:16:08.480 "method": "bdev_wait_for_examine" 00:16:08.480 } 00:16:08.480 ] 00:16:08.480 } 00:16:08.480 ] 00:16:08.480 } 00:16:08.740 [2024-07-13 08:02:14.341287] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:16:08.740 [2024-07-13 08:02:14.341491] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69760 ] 00:16:08.740 [2024-07-13 08:02:14.471976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.740 [2024-07-13 08:02:14.521867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.999 [2024-07-13 08:02:14.610967] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:16:08.999  Copying: 12/36 [MB] (average 324 MBps)[2024-07-13 08:02:14.671658] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:16:09.257 00:16:09.257 00:16:09.257 ************************************ 00:16:09.257 END TEST dd_sparse_file_to_bdev 00:16:09.257 ************************************ 00:16:09.257 00:16:09.257 real 0m0.687s 00:16:09.257 user 0m0.331s 00:16:09.257 sys 0m0.203s 00:16:09.257 08:02:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:09.257 08:02:14 -- common/autotest_common.sh@10 -- # set +x 00:16:09.257 08:02:14 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:16:09.257 08:02:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:09.257 08:02:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:09.257 08:02:14 -- common/autotest_common.sh@10 -- # set +x 00:16:09.257 ************************************ 00:16:09.257 START TEST dd_sparse_bdev_to_file 00:16:09.257 ************************************ 00:16:09.257 08:02:14 -- common/autotest_common.sh@1104 -- # bdev_to_file 00:16:09.257 08:02:14 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:16:09.257 08:02:14 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:16:09.257 08:02:14 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:16:09.257 08:02:14 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:16:09.257 08:02:14 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:16:09.257 08:02:14 -- dd/sparse.sh@91 -- # gen_conf 00:16:09.257 08:02:14 -- dd/common.sh@31 -- # xtrace_disable 00:16:09.257 08:02:14 -- common/autotest_common.sh@10 -- # set +x 00:16:09.257 { 00:16:09.257 "subsystems": [ 00:16:09.257 { 00:16:09.257 "subsystem": "bdev", 00:16:09.257 "config": [ 00:16:09.257 { 00:16:09.258 "params": { 00:16:09.258 "block_size": 4096, 00:16:09.258 "name": "dd_aio", 00:16:09.258 "filename": "dd_sparse_aio_disk" 00:16:09.258 }, 00:16:09.258 "method": "bdev_aio_create" 00:16:09.258 }, 00:16:09.258 { 00:16:09.258 "method": "bdev_wait_for_examine" 00:16:09.258 } 00:16:09.258 ] 00:16:09.258 } 00:16:09.258 ] 00:16:09.258 } 00:16:09.516 [2024-07-13 08:02:15.086054] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:16:09.516 [2024-07-13 08:02:15.086221] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69810 ] 00:16:09.516 [2024-07-13 08:02:15.215817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.516 [2024-07-13 08:02:15.282192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.035  Copying: 12/36 [MB] (average 1333 MBps) 00:16:10.035 00:16:10.035 08:02:15 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:16:10.035 08:02:15 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:16:10.035 08:02:15 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:16:10.035 08:02:15 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:16:10.035 08:02:15 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:16:10.035 08:02:15 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:16:10.035 08:02:15 -- dd/sparse.sh@102 -- # stat2_b=24576 00:16:10.035 08:02:15 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:16:10.035 08:02:15 -- dd/sparse.sh@103 -- # stat3_b=24576 00:16:10.035 08:02:15 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:16:10.035 00:16:10.035 real 0m0.684s 00:16:10.035 user 0m0.335s 00:16:10.035 sys 0m0.203s 00:16:10.035 08:02:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:10.035 ************************************ 00:16:10.035 END TEST dd_sparse_bdev_to_file 00:16:10.035 ************************************ 00:16:10.035 08:02:15 -- common/autotest_common.sh@10 -- # set +x 00:16:10.035 08:02:15 -- dd/sparse.sh@1 -- # cleanup 00:16:10.035 08:02:15 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:16:10.035 08:02:15 -- dd/sparse.sh@12 -- # rm file_zero1 00:16:10.035 08:02:15 -- dd/sparse.sh@13 -- # rm file_zero2 00:16:10.035 08:02:15 -- dd/sparse.sh@14 -- # rm file_zero3 00:16:10.035 00:16:10.035 real 0m2.373s 00:16:10.035 user 0m1.110s 00:16:10.035 sys 0m0.800s 00:16:10.035 08:02:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:10.035 08:02:15 -- common/autotest_common.sh@10 -- # set +x 00:16:10.035 ************************************ 00:16:10.035 END TEST spdk_dd_sparse 00:16:10.035 ************************************ 00:16:10.035 08:02:15 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:16:10.035 08:02:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:10.035 08:02:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:10.035 08:02:15 -- common/autotest_common.sh@10 -- # set +x 00:16:10.035 ************************************ 00:16:10.035 START TEST spdk_dd_negative 00:16:10.035 ************************************ 00:16:10.035 08:02:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:16:10.035 * Looking for test storage... 
00:16:10.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:16:10.035 08:02:15 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:10.035 08:02:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:10.035 08:02:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:10.035 08:02:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:10.035 08:02:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:16:10.035 08:02:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:16:10.035 08:02:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:16:10.035 08:02:15 -- paths/export.sh@5 -- # export PATH 00:16:10.035 08:02:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:16:10.035 08:02:15 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:16:10.035 08:02:15 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:16:10.035 08:02:15 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:16:10.035 08:02:15 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:16:10.295 08:02:15 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:16:10.295 08:02:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:10.295 08:02:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:10.295 08:02:15 -- common/autotest_common.sh@10 -- # set +x 00:16:10.295 ************************************ 00:16:10.295 START TEST dd_invalid_arguments 00:16:10.295 ************************************ 00:16:10.295 08:02:15 -- common/autotest_common.sh@1104 -- # 
invalid_arguments 00:16:10.295 08:02:15 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:16:10.295 08:02:15 -- common/autotest_common.sh@640 -- # local es=0 00:16:10.295 08:02:15 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:16:10.295 08:02:15 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:10.296 08:02:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:10.296 08:02:15 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:10.296 08:02:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:10.296 08:02:15 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:10.296 08:02:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:10.296 08:02:15 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:10.296 08:02:15 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:10.296 08:02:15 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:16:10.296 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:16:10.296 options: 00:16:10.296 -c, --config JSON config file (default none) 00:16:10.296 --json JSON config file (default none) 00:16:10.296 --json-ignore-init-errors 00:16:10.296 don't exit on invalid config entry 00:16:10.296 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:16:10.296 -g, --single-file-segments 00:16:10.296 force creating just one hugetlbfs file 00:16:10.296 -h, --help show this usage 00:16:10.296 -i, --shm-id shared memory ID (optional) 00:16:10.296 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:16:10.296 --lcores lcore to CPU mapping list. The list is in the format: 00:16:10.296 [<,lcores[@CPUs]>...] 00:16:10.296 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:16:10.296 Within the group, '-' is used for range separator, 00:16:10.296 ',' is used for single number separator. 00:16:10.296 '( )' can be omitted for single element group, 00:16:10.296 '@' can be omitted if cpus and lcores have the same value 00:16:10.296 -n, --mem-channels channel number of memory channels used for DPDK 00:16:10.296 -p, --main-core main (primary) core for DPDK 00:16:10.296 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:16:10.296 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:16:10.296 --disable-cpumask-locks Disable CPU core lock files. 
00:16:10.296 --silence-noticelog disable notice level logging to stderr 00:16:10.296 --msg-mempool-size global message memory pool size in count (default: 262143) 00:16:10.296 -u, --no-pci disable PCI access 00:16:10.296 --wait-for-rpc wait for RPCs to initialize subsystems 00:16:10.296 --max-delay maximum reactor delay (in microseconds) 00:16:10.296 -B, --pci-blocked pci addr to block (can be used more than once) 00:16:10.296 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:16:10.296 -R, --huge-unlink unlink huge files after initialization 00:16:10.296 -v, --version print SPDK version 00:16:10.296 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:16:10.296 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:16:10.296 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:16:10.296 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:16:10.296 Tracepoints vary in size and can use more than one trace entry. 00:16:10.296 --rpcs-allowed comma-separated list of permitted RPCS 00:16:10.296 --env-context Opaque context for use of the env implementation 00:16:10.296 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:16:10.296 --no-huge run without using hugepages 00:16:10.296 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_daos, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:16:10.296 -e, --tpoint-group [:] 00:16:10.296 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:16:10.296 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:16:10.296 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:16:10.296 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:16:10.296 [2024-07-13 08:02:15.992971] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:16:10.296 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:16:10.296 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:16:10.296 [--------- DD Options ---------] 00:16:10.296 --if Input file. Must specify either --if or --ib. 00:16:10.296 --ib Input bdev. Must specify either --if or --ib 00:16:10.296 --of Output file. Must specify either --of or --ob. 00:16:10.296 --ob Output bdev. Must specify either --of or --ob. 00:16:10.296 --iflag Input file flags. 00:16:10.296 --oflag Output file flags. 00:16:10.296 --bs I/O unit size (default: 4096) 00:16:10.296 --qd Queue depth (default: 2) 00:16:10.296 --count I/O unit count. The number of I/O units to copy.
(default: all) 00:16:10.296 --skip Skip this many I/O units at start of input. (default: 0) 00:16:10.296 --seek Skip this many I/O units at start of output. (default: 0) 00:16:10.296 --aio Force usage of AIO. (by default io_uring is used if available) 00:16:10.296 --sparse Enable hole skipping in input target 00:16:10.296 Available iflag and oflag values: 00:16:10.296 append - append mode 00:16:10.296 direct - use direct I/O for data 00:16:10.296 directory - fail unless a directory 00:16:10.296 dsync - use synchronized I/O for data 00:16:10.296 noatime - do not update access time 00:16:10.296 noctty - do not assign controlling terminal from file 00:16:10.296 nofollow - do not follow symlinks 00:16:10.296 nonblock - use non-blocking I/O 00:16:10.296 sync - use synchronized I/O for data and metadata 00:16:10.296 ************************************ 00:16:10.296 END TEST dd_invalid_arguments 00:16:10.296 ************************************ 00:16:10.296 08:02:16 -- common/autotest_common.sh@643 -- # es=2 00:16:10.296 08:02:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:10.296 08:02:16 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:10.296 08:02:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:10.296 00:16:10.296 real 0m0.160s 00:16:10.296 user 0m0.033s 00:16:10.296 sys 0m0.032s 00:16:10.296 08:02:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:10.296 08:02:16 -- common/autotest_common.sh@10 -- # set +x 00:16:10.296 08:02:16 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:16:10.296 08:02:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:10.296 08:02:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:10.296 08:02:16 -- common/autotest_common.sh@10 -- # set +x 00:16:10.296 ************************************ 00:16:10.296 START TEST dd_double_input 00:16:10.296 ************************************ 00:16:10.296 08:02:16 -- common/autotest_common.sh@1104 -- # double_input 00:16:10.296 08:02:16 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:16:10.296 08:02:16 -- common/autotest_common.sh@640 -- # local es=0 00:16:10.296 08:02:16 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:16:10.296 08:02:16 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:10.296 08:02:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:10.296 08:02:16 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:10.296 08:02:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:10.296 08:02:16 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:10.296 08:02:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:10.296 08:02:16 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:10.296 08:02:16 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:10.296 08:02:16 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:16:10.556 [2024-07-13 08:02:16.196517] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
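
Every case in this suite has the same shape: run_test invokes a small shell function, which wraps the spdk_dd invocation in the NOT helper; the type -t / type -P xtrace above is NOT first confirming, via valid_exec_arg, that its argument really is an executable before running it and inverting the result. A condensed sketch of the helper, simplified from test/common/autotest_common.sh:

# Condensed sketch of the NOT helper; the real one also runs
# valid_exec_arg and maps a few special exit statuses (e.g. the
# 244 -> 116 -> 1 sequence visible in dd_smaller_blocksize below).
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=$(( es - 128 ))  # fold "killed by signal" statuses
    (( !es == 0 ))                        # exit 0 exactly when "$@" failed
}

# Usage, as in dd_double_input above: naming both an input file and an
# input bdev (even an empty --ib=) must fail with EINVAL, i.e. es=22.
NOT ./build/bin/spdk_dd --if=test/dd/dd.dump0 --ib= --ob=
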
00:16:10.556 ************************************ 00:16:10.556 END TEST dd_double_input 00:16:10.556 ************************************ 00:16:10.556 08:02:16 -- common/autotest_common.sh@643 -- # es=22 00:16:10.556 08:02:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:10.556 08:02:16 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:10.556 08:02:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:10.556 00:16:10.556 real 0m0.155s 00:16:10.556 user 0m0.027s 00:16:10.556 sys 0m0.031s 00:16:10.556 08:02:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:10.556 08:02:16 -- common/autotest_common.sh@10 -- # set +x 00:16:10.556 08:02:16 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:16:10.556 08:02:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:10.556 08:02:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:10.556 08:02:16 -- common/autotest_common.sh@10 -- # set +x 00:16:10.556 ************************************ 00:16:10.556 START TEST dd_double_output 00:16:10.556 ************************************ 00:16:10.556 08:02:16 -- common/autotest_common.sh@1104 -- # double_output 00:16:10.556 08:02:16 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:16:10.556 08:02:16 -- common/autotest_common.sh@640 -- # local es=0 00:16:10.556 08:02:16 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:16:10.556 08:02:16 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:10.556 08:02:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:10.556 08:02:16 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:10.556 08:02:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:10.556 08:02:16 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:10.556 08:02:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:10.556 08:02:16 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:10.556 08:02:16 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:10.556 08:02:16 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:16:10.815 [2024-07-13 08:02:16.397934] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
00:16:10.815 08:02:16 -- common/autotest_common.sh@643 -- # es=22 00:16:10.815 08:02:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:10.815 08:02:16 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:10.815 08:02:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:10.815 00:16:10.815 real 0m0.159s 00:16:10.815 user 0m0.034s 00:16:10.815 sys 0m0.029s 00:16:10.815 08:02:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:10.815 08:02:16 -- common/autotest_common.sh@10 -- # set +x 00:16:10.815 ************************************ 00:16:10.815 END TEST dd_double_output 00:16:10.815 ************************************ 00:16:10.816 08:02:16 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:16:10.816 08:02:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:10.816 08:02:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:10.816 08:02:16 -- common/autotest_common.sh@10 -- # set +x 00:16:10.816 ************************************ 00:16:10.816 START TEST dd_no_input 00:16:10.816 ************************************ 00:16:10.816 08:02:16 -- common/autotest_common.sh@1104 -- # no_input 00:16:10.816 08:02:16 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:16:10.816 08:02:16 -- common/autotest_common.sh@640 -- # local es=0 00:16:10.816 08:02:16 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:16:10.816 08:02:16 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:10.816 08:02:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:10.816 08:02:16 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:10.816 08:02:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:10.816 08:02:16 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:10.816 08:02:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:10.816 08:02:16 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:10.816 08:02:16 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:10.816 08:02:16 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:16:10.816 [2024-07-13 08:02:16.607973] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:16:11.073 08:02:16 -- common/autotest_common.sh@643 -- # es=22 00:16:11.073 08:02:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:11.073 08:02:16 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:11.073 08:02:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:11.073 00:16:11.073 real 0m0.161s 00:16:11.073 user 0m0.030s 00:16:11.073 sys 0m0.036s 00:16:11.073 08:02:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:11.073 ************************************ 00:16:11.073 END TEST dd_no_input 00:16:11.073 ************************************ 00:16:11.073 08:02:16 -- common/autotest_common.sh@10 -- # set +x 00:16:11.073 08:02:16 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:16:11.073 08:02:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:11.073 08:02:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:11.073 08:02:16 -- common/autotest_common.sh@10 -- # set +x 00:16:11.073 ************************************ 
00:16:11.073 START TEST dd_no_output 00:16:11.073 ************************************ 00:16:11.073 08:02:16 -- common/autotest_common.sh@1104 -- # no_output 00:16:11.073 08:02:16 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:16:11.073 08:02:16 -- common/autotest_common.sh@640 -- # local es=0 00:16:11.073 08:02:16 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:16:11.073 08:02:16 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:11.073 08:02:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:11.073 08:02:16 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:11.073 08:02:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:11.073 08:02:16 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:11.073 08:02:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:11.073 08:02:16 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:11.073 08:02:16 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:11.073 08:02:16 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:16:11.073 [2024-07-13 08:02:16.813620] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:16:11.073 ************************************ 00:16:11.073 END TEST dd_no_output 00:16:11.073 ************************************ 00:16:11.073 08:02:16 -- common/autotest_common.sh@643 -- # es=22 00:16:11.073 08:02:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:11.073 08:02:16 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:11.073 08:02:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:11.073 00:16:11.073 real 0m0.156s 00:16:11.073 user 0m0.025s 00:16:11.073 sys 0m0.035s 00:16:11.073 08:02:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:11.073 08:02:16 -- common/autotest_common.sh@10 -- # set +x 00:16:11.073 08:02:16 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:16:11.073 08:02:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:11.073 08:02:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:11.073 08:02:16 -- common/autotest_common.sh@10 -- # set +x 00:16:11.073 ************************************ 00:16:11.073 START TEST dd_wrong_blocksize 00:16:11.073 ************************************ 00:16:11.073 08:02:16 -- common/autotest_common.sh@1104 -- # wrong_blocksize 00:16:11.073 08:02:16 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:16:11.073 08:02:16 -- common/autotest_common.sh@640 -- # local es=0 00:16:11.073 08:02:16 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:16:11.073 08:02:16 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:11.073 08:02:16 -- common/autotest_common.sh@632 -- # case 
"$(type -t "$arg")" in 00:16:11.073 08:02:16 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:11.331 08:02:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:11.331 08:02:16 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:11.331 08:02:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:11.331 08:02:16 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:11.331 08:02:16 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:11.331 08:02:16 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:16:11.331 [2024-07-13 08:02:17.016883] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:16:11.331 ************************************ 00:16:11.331 END TEST dd_wrong_blocksize 00:16:11.331 ************************************ 00:16:11.331 08:02:17 -- common/autotest_common.sh@643 -- # es=22 00:16:11.331 08:02:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:11.331 08:02:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:11.331 08:02:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:11.331 00:16:11.331 real 0m0.157s 00:16:11.331 user 0m0.026s 00:16:11.331 sys 0m0.036s 00:16:11.331 08:02:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:11.331 08:02:17 -- common/autotest_common.sh@10 -- # set +x 00:16:11.331 08:02:17 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:16:11.331 08:02:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:11.331 08:02:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:11.331 08:02:17 -- common/autotest_common.sh@10 -- # set +x 00:16:11.331 ************************************ 00:16:11.331 START TEST dd_smaller_blocksize 00:16:11.331 ************************************ 00:16:11.331 08:02:17 -- common/autotest_common.sh@1104 -- # smaller_blocksize 00:16:11.332 08:02:17 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:16:11.332 08:02:17 -- common/autotest_common.sh@640 -- # local es=0 00:16:11.332 08:02:17 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:16:11.332 08:02:17 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:11.332 08:02:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:11.332 08:02:17 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:11.332 08:02:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:11.332 08:02:17 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:11.332 08:02:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:11.332 08:02:17 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:11.332 08:02:17 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:16:11.332 08:02:17 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:16:11.590 [2024-07-13 08:02:17.225598] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:16:11.590 [2024-07-13 08:02:17.225779] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70070 ] 00:16:11.590 [2024-07-13 08:02:17.385077] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.849 [2024-07-13 08:02:17.434449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.849 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:16:11.849 [2024-07-13 08:02:17.581732] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:16:11.849 [2024-07-13 08:02:17.581814] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:12.107 [2024-07-13 08:02:17.688475] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:16:12.107 ************************************ 00:16:12.107 END TEST dd_smaller_blocksize 00:16:12.107 ************************************ 00:16:12.107 08:02:17 -- common/autotest_common.sh@643 -- # es=244 00:16:12.107 08:02:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:12.107 08:02:17 -- common/autotest_common.sh@652 -- # es=116 00:16:12.107 08:02:17 -- common/autotest_common.sh@653 -- # case "$es" in 00:16:12.107 08:02:17 -- common/autotest_common.sh@660 -- # es=1 00:16:12.107 08:02:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:12.107 00:16:12.107 real 0m0.688s 00:16:12.107 user 0m0.254s 00:16:12.107 sys 0m0.237s 00:16:12.107 08:02:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:12.107 08:02:17 -- common/autotest_common.sh@10 -- # set +x 00:16:12.107 08:02:17 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:16:12.107 08:02:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:12.107 08:02:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:12.107 08:02:17 -- common/autotest_common.sh@10 -- # set +x 00:16:12.107 ************************************ 00:16:12.107 START TEST dd_invalid_count 00:16:12.107 ************************************ 00:16:12.107 08:02:17 -- common/autotest_common.sh@1104 -- # invalid_count 00:16:12.107 08:02:17 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:16:12.107 08:02:17 -- common/autotest_common.sh@640 -- # local es=0 00:16:12.107 08:02:17 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:16:12.107 08:02:17 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:12.107 08:02:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:12.107 08:02:17 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:12.107 08:02:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:12.107 08:02:17 
-- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:12.107 08:02:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:12.107 08:02:17 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:12.107 08:02:17 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:12.107 08:02:17 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:16:12.366 [2024-07-13 08:02:17.960233] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:16:12.366 ************************************ 00:16:12.366 END TEST dd_invalid_count 00:16:12.366 ************************************ 00:16:12.366 08:02:17 -- common/autotest_common.sh@643 -- # es=22 00:16:12.366 08:02:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:12.366 08:02:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:12.366 08:02:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:12.366 00:16:12.366 real 0m0.161s 00:16:12.366 user 0m0.032s 00:16:12.366 sys 0m0.035s 00:16:12.366 08:02:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:12.366 08:02:17 -- common/autotest_common.sh@10 -- # set +x 00:16:12.366 08:02:18 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:16:12.366 08:02:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:12.366 08:02:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:12.366 08:02:18 -- common/autotest_common.sh@10 -- # set +x 00:16:12.366 ************************************ 00:16:12.366 START TEST dd_invalid_oflag 00:16:12.366 ************************************ 00:16:12.366 08:02:18 -- common/autotest_common.sh@1104 -- # invalid_oflag 00:16:12.366 08:02:18 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:16:12.366 08:02:18 -- common/autotest_common.sh@640 -- # local es=0 00:16:12.366 08:02:18 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:16:12.366 08:02:18 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:12.366 08:02:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:12.366 08:02:18 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:12.366 08:02:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:12.366 08:02:18 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:12.366 08:02:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:12.366 08:02:18 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:12.366 08:02:18 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:12.366 08:02:18 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:16:12.366 [2024-07-13 08:02:18.168726] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:16:12.625 08:02:18 -- common/autotest_common.sh@643 -- # es=22 00:16:12.625 08:02:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:12.625 08:02:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:12.625 
08:02:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:12.625 ************************************ 00:16:12.625 END TEST dd_invalid_oflag 00:16:12.625 ************************************ 00:16:12.625 00:16:12.625 real 0m0.163s 00:16:12.625 user 0m0.036s 00:16:12.625 sys 0m0.032s 00:16:12.625 08:02:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:12.625 08:02:18 -- common/autotest_common.sh@10 -- # set +x 00:16:12.625 08:02:18 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:16:12.625 08:02:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:12.625 08:02:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:12.625 08:02:18 -- common/autotest_common.sh@10 -- # set +x 00:16:12.625 ************************************ 00:16:12.625 START TEST dd_invalid_iflag 00:16:12.625 ************************************ 00:16:12.625 08:02:18 -- common/autotest_common.sh@1104 -- # invalid_iflag 00:16:12.625 08:02:18 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:16:12.625 08:02:18 -- common/autotest_common.sh@640 -- # local es=0 00:16:12.625 08:02:18 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:16:12.625 08:02:18 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:12.625 08:02:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:12.625 08:02:18 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:12.625 08:02:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:12.625 08:02:18 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:12.625 08:02:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:12.625 08:02:18 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:12.625 08:02:18 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:12.625 08:02:18 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:16:12.625 [2024-07-13 08:02:18.376974] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:16:12.625 ************************************ 00:16:12.625 END TEST dd_invalid_iflag 00:16:12.625 ************************************ 00:16:12.625 08:02:18 -- common/autotest_common.sh@643 -- # es=22 00:16:12.625 08:02:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:12.625 08:02:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:12.625 08:02:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:12.625 00:16:12.625 real 0m0.155s 00:16:12.625 user 0m0.030s 00:16:12.625 sys 0m0.029s 00:16:12.625 08:02:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:12.625 08:02:18 -- common/autotest_common.sh@10 -- # set +x 00:16:12.625 08:02:18 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:16:12.625 08:02:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:12.625 08:02:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:12.625 08:02:18 -- common/autotest_common.sh@10 -- # set +x 00:16:12.884 ************************************ 00:16:12.884 START TEST dd_unknown_flag 00:16:12.884 ************************************ 00:16:12.884 08:02:18 -- common/autotest_common.sh@1104 -- # 
unknown_flag 00:16:12.884 08:02:18 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:16:12.884 08:02:18 -- common/autotest_common.sh@640 -- # local es=0 00:16:12.884 08:02:18 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:16:12.884 08:02:18 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:12.884 08:02:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:12.884 08:02:18 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:12.884 08:02:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:12.884 08:02:18 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:12.884 08:02:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:12.884 08:02:18 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:12.884 08:02:18 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:12.884 08:02:18 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:16:12.884 [2024-07-13 08:02:18.588094] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:16:12.884 [2024-07-13 08:02:18.588356] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70179 ] 00:16:13.143 [2024-07-13 08:02:18.725961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.143 [2024-07-13 08:02:18.775217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.143 [2024-07-13 08:02:18.853022] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:16:13.143 [2024-07-13 08:02:18.853101] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Invalid argument 00:16:13.143 [2024-07-13 08:02:18.853122] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Invalid argument 00:16:13.143 [2024-07-13 08:02:18.853168] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:13.403 [2024-07-13 08:02:18.957250] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:16:13.403 08:02:19 -- common/autotest_common.sh@643 -- # es=234 00:16:13.403 08:02:19 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:13.403 08:02:19 -- common/autotest_common.sh@652 -- # es=106 00:16:13.403 08:02:19 -- common/autotest_common.sh@653 -- # case "$es" in 00:16:13.403 08:02:19 -- common/autotest_common.sh@660 -- # es=1 00:16:13.403 08:02:19 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:13.403 ************************************ 00:16:13.403 END TEST dd_unknown_flag 00:16:13.403 ************************************ 00:16:13.403 00:16:13.403 real 0m0.600s 00:16:13.403 user 0m0.232s 00:16:13.403 sys 0m0.169s 00:16:13.403 08:02:19 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:16:13.403 08:02:19 -- common/autotest_common.sh@10 -- # set +x 00:16:13.403 08:02:19 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:16:13.403 08:02:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:13.403 08:02:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:13.403 08:02:19 -- common/autotest_common.sh@10 -- # set +x 00:16:13.403 ************************************ 00:16:13.403 START TEST dd_invalid_json 00:16:13.403 ************************************ 00:16:13.403 08:02:19 -- common/autotest_common.sh@1104 -- # invalid_json 00:16:13.403 08:02:19 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:16:13.403 08:02:19 -- common/autotest_common.sh@640 -- # local es=0 00:16:13.403 08:02:19 -- dd/negative_dd.sh@95 -- # : 00:16:13.403 08:02:19 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:16:13.403 08:02:19 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:13.403 08:02:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:13.403 08:02:19 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:13.403 08:02:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:13.403 08:02:19 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:13.403 08:02:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:13.403 08:02:19 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:13.403 08:02:19 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:13.403 08:02:19 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:16:13.661 [2024-07-13 08:02:19.232278] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
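
dd_invalid_json, starting above, points --json at a process substitution that emits nothing (the bare ':' at negative_dd.sh line 95 in the xtrace), so startup dies in app_json_config_read with "Parsing JSON configuration failed (-2)" before any copying is attempted. A minimal equivalent:

# Sketch of the invalid-JSON case: an empty --json config must abort
# spdk_dd during JSON config parsing, well before any dd work starts.
NOT ./build/bin/spdk_dd \
    --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 \
    --json <(:)    # ':' writes nothing, leaving the config unparseable
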
00:16:13.661 [2024-07-13 08:02:19.232485] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70220 ] 00:16:13.661 [2024-07-13 08:02:19.362288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.661 [2024-07-13 08:02:19.411002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.661 [2024-07-13 08:02:19.411204] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:16:13.661 [2024-07-13 08:02:19.411240] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:13.661 [2024-07-13 08:02:19.411294] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:16:13.920 08:02:19 -- common/autotest_common.sh@643 -- # es=234 00:16:13.920 08:02:19 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:13.920 08:02:19 -- common/autotest_common.sh@652 -- # es=106 00:16:13.920 08:02:19 -- common/autotest_common.sh@653 -- # case "$es" in 00:16:13.920 08:02:19 -- common/autotest_common.sh@660 -- # es=1 00:16:13.920 08:02:19 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:13.920 00:16:13.920 real 0m0.401s 00:16:13.920 user 0m0.123s 00:16:13.920 sys 0m0.082s 00:16:13.920 08:02:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:13.920 ************************************ 00:16:13.920 END TEST dd_invalid_json 00:16:13.920 ************************************ 00:16:13.920 08:02:19 -- common/autotest_common.sh@10 -- # set +x 00:16:13.920 ************************************ 00:16:13.920 END TEST spdk_dd_negative 00:16:13.920 ************************************ 00:16:13.920 00:16:13.920 real 0m3.790s 00:16:13.920 user 0m1.120s 00:16:13.920 sys 0m1.191s 00:16:13.920 08:02:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:13.920 08:02:19 -- common/autotest_common.sh@10 -- # set +x 00:16:13.920 ************************************ 00:16:13.920 END TEST spdk_dd 00:16:13.920 ************************************ 00:16:13.920 00:16:13.920 real 0m59.786s 00:16:13.920 user 0m27.931s 00:16:13.920 sys 0m16.031s 00:16:13.920 08:02:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:13.920 08:02:19 -- common/autotest_common.sh@10 -- # set +x 00:16:13.920 08:02:19 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:16:13.920 08:02:19 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:16:13.920 08:02:19 -- spdk/autotest.sh@268 -- # timing_exit lib 00:16:13.920 08:02:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:13.920 08:02:19 -- common/autotest_common.sh@10 -- # set +x 00:16:13.920 08:02:19 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:16:13.920 08:02:19 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:16:13.920 08:02:19 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:16:13.920 08:02:19 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:16:13.920 08:02:19 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:16:13.920 08:02:19 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:16:13.920 08:02:19 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:16:13.920 08:02:19 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:16:13.920 08:02:19 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:16:13.920 08:02:19 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:16:13.920 08:02:19 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:16:13.920 08:02:19 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:16:13.920 08:02:19 -- 
spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:16:13.920 08:02:19 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:16:13.920 08:02:19 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:16:13.920 08:02:19 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:16:13.920 08:02:19 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:16:13.920 08:02:19 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:16:13.920 08:02:19 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:16:13.920 08:02:19 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:16:13.920 08:02:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:13.920 08:02:19 -- common/autotest_common.sh@10 -- # set +x 00:16:13.920 08:02:19 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:16:13.920 08:02:19 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:16:13.920 08:02:19 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:16:13.920 08:02:19 -- common/autotest_common.sh@10 -- # set +x 00:16:14.858 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:16:14.858 Waiting for block devices as requested 00:16:15.117 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:16:15.376 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:16:15.376 Cleaning 00:16:15.376 Removing: /var/run/dpdk/spdk0/config 00:16:15.376 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:16:15.376 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:16:15.376 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:16:15.376 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:16:15.376 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:16:15.376 Removing: /var/run/dpdk/spdk0/hugepage_info 00:16:15.376 Removing: /dev/shm/spdk_tgt_trace.pid52069 00:16:15.376 Removing: /var/run/dpdk/spdk0 00:16:15.376 Removing: /var/run/dpdk/spdk_pid51880 00:16:15.376 Removing: /var/run/dpdk/spdk_pid52069 00:16:15.376 Removing: /var/run/dpdk/spdk_pid52358 00:16:15.376 Removing: /var/run/dpdk/spdk_pid52590 00:16:15.376 Removing: /var/run/dpdk/spdk_pid52764 00:16:15.376 Removing: /var/run/dpdk/spdk_pid52849 00:16:15.376 Removing: /var/run/dpdk/spdk_pid52934 00:16:15.376 Removing: /var/run/dpdk/spdk_pid53038 00:16:15.376 Removing: /var/run/dpdk/spdk_pid53128 00:16:15.376 Removing: /var/run/dpdk/spdk_pid53176 00:16:15.376 Removing: /var/run/dpdk/spdk_pid53219 00:16:15.376 Removing: /var/run/dpdk/spdk_pid53296 00:16:15.376 Removing: /var/run/dpdk/spdk_pid53439 00:16:15.376 Removing: /var/run/dpdk/spdk_pid53500 00:16:15.376 Removing: /var/run/dpdk/spdk_pid53560 00:16:15.376 Removing: /var/run/dpdk/spdk_pid53581 00:16:15.376 Removing: /var/run/dpdk/spdk_pid53659 00:16:15.376 Removing: /var/run/dpdk/spdk_pid53680 00:16:15.376 Removing: /var/run/dpdk/spdk_pid53766 00:16:15.376 Removing: /var/run/dpdk/spdk_pid53787 00:16:15.376 Removing: /var/run/dpdk/spdk_pid53849 00:16:15.376 Removing: /var/run/dpdk/spdk_pid53866 00:16:15.376 Removing: /var/run/dpdk/spdk_pid53911 00:16:15.376 Removing: /var/run/dpdk/spdk_pid53934 00:16:15.376 Removing: /var/run/dpdk/spdk_pid54082 00:16:15.376 Removing: /var/run/dpdk/spdk_pid54120 00:16:15.376 Removing: /var/run/dpdk/spdk_pid54161 00:16:15.376 Removing: /var/run/dpdk/spdk_pid54250 00:16:15.376 Removing: /var/run/dpdk/spdk_pid54317 00:16:15.376 Removing: /var/run/dpdk/spdk_pid54349 00:16:15.376 Removing: /var/run/dpdk/spdk_pid54425 00:16:15.376 Removing: /var/run/dpdk/spdk_pid54459 00:16:15.376 Removing: /var/run/dpdk/spdk_pid54494 00:16:15.376 Removing: /var/run/dpdk/spdk_pid54528 
00:16:15.376 Removing: /var/run/dpdk/spdk_pid54563 00:16:15.635 Removing: /var/run/dpdk/spdk_pid54599 00:16:15.635 Removing: /var/run/dpdk/spdk_pid54639 00:16:15.635 Removing: /var/run/dpdk/spdk_pid54667 00:16:15.635 Removing: /var/run/dpdk/spdk_pid54710 00:16:15.635 Removing: /var/run/dpdk/spdk_pid54732 00:16:15.635 Removing: /var/run/dpdk/spdk_pid54779 00:16:15.635 Removing: /var/run/dpdk/spdk_pid54801 00:16:15.635 Removing: /var/run/dpdk/spdk_pid54852 00:16:15.635 Removing: /var/run/dpdk/spdk_pid54874 00:16:15.635 Removing: /var/run/dpdk/spdk_pid54921 00:16:15.635 Removing: /var/run/dpdk/spdk_pid54943 00:16:15.635 Removing: /var/run/dpdk/spdk_pid54991 00:16:15.635 Removing: /var/run/dpdk/spdk_pid55012 00:16:15.635 Removing: /var/run/dpdk/spdk_pid55054 00:16:15.635 Removing: /var/run/dpdk/spdk_pid55083 00:16:15.635 Removing: /var/run/dpdk/spdk_pid55125 00:16:15.635 Removing: /var/run/dpdk/spdk_pid55152 00:16:15.635 Removing: /var/run/dpdk/spdk_pid55194 00:16:15.635 Removing: /var/run/dpdk/spdk_pid55216 00:16:15.635 Removing: /var/run/dpdk/spdk_pid55263 00:16:15.635 Removing: /var/run/dpdk/spdk_pid55290 00:16:15.635 Removing: /var/run/dpdk/spdk_pid55337 00:16:15.635 Removing: /var/run/dpdk/spdk_pid55408 00:16:15.635 Removing: /var/run/dpdk/spdk_pid55525 00:16:15.635 Removing: /var/run/dpdk/spdk_pid55696 00:16:15.635 Removing: /var/run/dpdk/spdk_pid55750 00:16:15.635 Removing: /var/run/dpdk/spdk_pid55788 00:16:15.635 Removing: /var/run/dpdk/spdk_pid55898 00:16:15.635 Removing: /var/run/dpdk/spdk_pid56103 00:16:15.635 Removing: /var/run/dpdk/spdk_pid56280 00:16:15.635 Removing: /var/run/dpdk/spdk_pid56380 00:16:15.635 Removing: /var/run/dpdk/spdk_pid56488 00:16:15.635 Removing: /var/run/dpdk/spdk_pid56538 00:16:15.635 Removing: /var/run/dpdk/spdk_pid56560 00:16:15.635 Removing: /var/run/dpdk/spdk_pid56589 00:16:15.635 Removing: /var/run/dpdk/spdk_pid57065 00:16:15.635 Removing: /var/run/dpdk/spdk_pid57142 00:16:15.635 Removing: /var/run/dpdk/spdk_pid57247 00:16:15.635 Removing: /var/run/dpdk/spdk_pid57291 00:16:15.635 Removing: /var/run/dpdk/spdk_pid58143 00:16:15.635 Removing: /var/run/dpdk/spdk_pid58987 00:16:15.635 Removing: /var/run/dpdk/spdk_pid59822 00:16:15.635 Removing: /var/run/dpdk/spdk_pid60866 00:16:15.635 Removing: /var/run/dpdk/spdk_pid61860 00:16:15.635 Removing: /var/run/dpdk/spdk_pid62857 00:16:15.635 Removing: /var/run/dpdk/spdk_pid64234 00:16:15.635 Removing: /var/run/dpdk/spdk_pid65351 00:16:15.635 Removing: /var/run/dpdk/spdk_pid66468 00:16:15.635 Removing: /var/run/dpdk/spdk_pid67161 00:16:15.635 Removing: /var/run/dpdk/spdk_pid67200 00:16:15.635 Removing: /var/run/dpdk/spdk_pid67256 00:16:15.635 Removing: /var/run/dpdk/spdk_pid67302 00:16:15.635 Removing: /var/run/dpdk/spdk_pid67423 00:16:15.635 Removing: /var/run/dpdk/spdk_pid67568 00:16:15.635 Removing: /var/run/dpdk/spdk_pid67779 00:16:15.635 Removing: /var/run/dpdk/spdk_pid68018 00:16:15.635 Removing: /var/run/dpdk/spdk_pid68042 00:16:15.635 Removing: /var/run/dpdk/spdk_pid68081 00:16:15.635 Removing: /var/run/dpdk/spdk_pid68103 00:16:15.635 Removing: /var/run/dpdk/spdk_pid68112 00:16:15.635 Removing: /var/run/dpdk/spdk_pid68143 00:16:15.635 Removing: /var/run/dpdk/spdk_pid68156 00:16:15.635 Removing: /var/run/dpdk/spdk_pid68176 00:16:15.635 Removing: /var/run/dpdk/spdk_pid68197 00:16:15.635 Removing: /var/run/dpdk/spdk_pid68212 00:16:15.635 Removing: /var/run/dpdk/spdk_pid68226 00:16:15.635 Removing: /var/run/dpdk/spdk_pid68253 00:16:15.635 Removing: /var/run/dpdk/spdk_pid68261 00:16:15.635 Removing: 
/var/run/dpdk/spdk_pid68282 00:16:15.636 Removing: /var/run/dpdk/spdk_pid68302 00:16:15.636 Removing: /var/run/dpdk/spdk_pid68321 00:16:15.636 Removing: /var/run/dpdk/spdk_pid68331 00:16:15.636 Removing: /var/run/dpdk/spdk_pid68363 00:16:15.636 Removing: /var/run/dpdk/spdk_pid68376 00:16:15.636 Removing: /var/run/dpdk/spdk_pid68392 00:16:15.636 Removing: /var/run/dpdk/spdk_pid68432 00:16:15.636 Removing: /var/run/dpdk/spdk_pid68446 00:16:15.636 Removing: /var/run/dpdk/spdk_pid68479 00:16:15.636 Removing: /var/run/dpdk/spdk_pid68557 00:16:15.636 Removing: /var/run/dpdk/spdk_pid68582 00:16:15.636 Removing: /var/run/dpdk/spdk_pid68603 00:16:15.636 Removing: /var/run/dpdk/spdk_pid68639 00:16:15.636 Removing: /var/run/dpdk/spdk_pid68648 00:16:15.636 Removing: /var/run/dpdk/spdk_pid68664 00:16:15.636 Removing: /var/run/dpdk/spdk_pid68710 00:16:15.636 Removing: /var/run/dpdk/spdk_pid68728 00:16:15.636 Removing: /var/run/dpdk/spdk_pid68764 00:16:15.636 Removing: /var/run/dpdk/spdk_pid68775 00:16:15.636 Removing: /var/run/dpdk/spdk_pid68790 00:16:15.636 Removing: /var/run/dpdk/spdk_pid68799 00:16:15.636 Removing: /var/run/dpdk/spdk_pid68812 00:16:15.636 Removing: /var/run/dpdk/spdk_pid68828 00:16:15.636 Removing: /var/run/dpdk/spdk_pid68834 00:16:15.636 Removing: /var/run/dpdk/spdk_pid68851 00:16:15.636 Removing: /var/run/dpdk/spdk_pid68877 00:16:15.636 Removing: /var/run/dpdk/spdk_pid68918 00:16:15.636 Removing: /var/run/dpdk/spdk_pid68932 00:16:15.636 Removing: /var/run/dpdk/spdk_pid68965 00:16:15.636 Removing: /var/run/dpdk/spdk_pid68979 00:16:15.636 Removing: /var/run/dpdk/spdk_pid68989 00:16:15.636 Removing: /var/run/dpdk/spdk_pid69046 00:16:15.636 Removing: /var/run/dpdk/spdk_pid69065 00:16:15.636 Removing: /var/run/dpdk/spdk_pid69089 00:16:15.636 Removing: /var/run/dpdk/spdk_pid69110 00:16:15.636 Removing: /var/run/dpdk/spdk_pid69115 00:16:15.636 Removing: /var/run/dpdk/spdk_pid69132 00:16:15.636 Removing: /var/run/dpdk/spdk_pid69144 00:16:15.636 Removing: /var/run/dpdk/spdk_pid69154 00:16:15.636 Removing: /var/run/dpdk/spdk_pid69166 00:16:15.636 Removing: /var/run/dpdk/spdk_pid69176 00:16:15.636 Removing: /var/run/dpdk/spdk_pid69269 00:16:15.636 Removing: /var/run/dpdk/spdk_pid69302 00:16:15.636 Removing: /var/run/dpdk/spdk_pid69405 00:16:15.636 Removing: /var/run/dpdk/spdk_pid69428 00:16:15.636 Removing: /var/run/dpdk/spdk_pid69466 00:16:15.636 Removing: /var/run/dpdk/spdk_pid69512 00:16:15.636 Removing: /var/run/dpdk/spdk_pid69537 00:16:15.636 Removing: /var/run/dpdk/spdk_pid69555 00:16:15.636 Removing: /var/run/dpdk/spdk_pid69577 00:16:15.636 Removing: /var/run/dpdk/spdk_pid69614 00:16:15.636 Removing: /var/run/dpdk/spdk_pid69631 00:16:15.636 Removing: /var/run/dpdk/spdk_pid69707 00:16:15.636 Removing: /var/run/dpdk/spdk_pid69760 00:16:15.636 Removing: /var/run/dpdk/spdk_pid69810 00:16:15.636 Removing: /var/run/dpdk/spdk_pid70070 00:16:15.636 Removing: /var/run/dpdk/spdk_pid70179 00:16:15.636 Removing: /var/run/dpdk/spdk_pid70220 00:16:15.636 Clean 00:16:15.895 killing process with pid 43376 00:16:15.895 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: 43376 Terminated "$rootdir/scripts/perf/pm/collect-cpu-load" -d "$output_dir/power" > /dev/null (wd: /home/vagrant/spdk_repo) 00:16:15.895 killing process with pid 43377 00:16:15.895 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: 43377 Terminated "$rootdir/scripts/perf/pm/collect-vmstat" -d "$output_dir/power" > /dev/null (wd: /home/vagrant/spdk_repo) 00:16:15.895 08:02:21 -- 
common/autotest_common.sh@1436 -- # return 0 00:16:15.895 08:02:21 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:16:15.895 08:02:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:15.895 08:02:21 -- common/autotest_common.sh@10 -- # set +x 00:16:15.895 08:02:21 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:16:15.895 08:02:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:15.895 08:02:21 -- common/autotest_common.sh@10 -- # set +x 00:16:15.895 08:02:21 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:16:15.895 08:02:21 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:16:15.895 08:02:21 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:16:15.895 08:02:21 -- spdk/autotest.sh@394 -- # hash lcov 00:16:15.895 08:02:21 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:16:15.895 08:02:21 -- spdk/autotest.sh@396 -- # hostname 00:16:15.895 08:02:21 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t centos7-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:16:16.154 geninfo: WARNING: invalid characters removed from testname! 00:17:02.819 08:03:07 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:07.005 08:03:11 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:08.905 08:03:14 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:11.436 08:03:17 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:13.970 08:03:19 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:16.506 08:03:22 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc 
geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:19.793 08:03:25 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:17:19.793 08:03:25 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:19.793 08:03:25 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:17:19.793 08:03:25 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:19.793 08:03:25 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:19.793 08:03:25 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:17:19.793 08:03:25 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:17:19.793 08:03:25 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:17:19.793 08:03:25 -- paths/export.sh@5 -- $ export PATH 00:17:19.793 08:03:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:17:19.793 08:03:25 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:17:19.793 08:03:25 -- common/autobuild_common.sh@435 -- $ date +%s 00:17:19.793 08:03:25 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720857805.XXXXXX 00:17:19.793 08:03:25 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720857805.FwaCOs 00:17:19.793 08:03:25 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:17:19.793 08:03:25 -- common/autobuild_common.sh@441 -- $ '[' -n v22.11.4 ']' 00:17:19.793 08:03:25 -- common/autobuild_common.sh@442 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:17:19.793 08:03:25 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:17:19.793 08:03:25 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:17:19.793 08:03:25 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:17:19.793 08:03:25 -- common/autobuild_common.sh@451 -- $ get_config_params 00:17:19.793 08:03:25 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:17:19.793 08:03:25 -- common/autotest_common.sh@10 -- $ set +x 00:17:19.793 08:03:25 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --enable-asan 
--enable-coverage --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-daos' 00:17:19.793 08:03:25 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:17:19.793 08:03:25 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:17:19.793 08:03:25 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:17:19.793 08:03:25 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:17:19.793 08:03:25 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:17:19.793 08:03:25 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:17:19.793 08:03:25 -- common/autotest_common.sh@712 -- $ xtrace_disable 00:17:19.793 08:03:25 -- common/autotest_common.sh@10 -- $ set +x 00:17:19.793 08:03:25 -- spdk/autopackage.sh@26 -- $ [[ '' == *clang* ]] 00:17:19.793 08:03:25 -- spdk/autopackage.sh@36 -- $ [[ -n v22.11.4 ]] 00:17:19.793 08:03:25 -- spdk/autopackage.sh@36 -- $ [[ -e /tmp/spdk-ld-path ]] 00:17:19.793 08:03:25 -- spdk/autopackage.sh@37 -- $ source /tmp/spdk-ld-path 00:17:19.793 08:03:25 -- tmp/spdk-ld-path@1 -- $ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:17:19.793 08:03:25 -- tmp/spdk-ld-path@1 -- $ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:17:19.793 08:03:25 -- tmp/spdk-ld-path@2 -- $ export PKG_CONFIG_PATH= 00:17:19.793 08:03:25 -- tmp/spdk-ld-path@2 -- $ PKG_CONFIG_PATH= 00:17:19.793 08:03:25 -- spdk/autopackage.sh@40 -- $ get_config_params 00:17:19.793 08:03:25 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:17:19.793 08:03:25 -- common/autotest_common.sh@10 -- $ set +x 00:17:19.793 08:03:25 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:17:19.793 08:03:25 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --enable-asan --enable-coverage --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-daos' 00:17:19.793 08:03:25 -- spdk/autopackage.sh@41 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --enable-asan --enable-coverage --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-daos --enable-lto 00:17:19.793 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:17:19.793 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:17:19.793 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:17:19.793 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:17:19.793 RDMA_OPTION_ID_ACK_TIMEOUT is not supported 00:17:19.793 Using 'verbs' RDMA provider 00:17:20.395 WARNING: ISA-L & DPDK crypto cannot be used as nasm ver must be 2.14 or newer. 00:17:20.395 Without ISA-L, there is no software support for crypto or compression, 00:17:20.395 so these features will be disabled. 00:17:20.653 Creating mk/config.mk...done. 00:17:20.653 Creating mk/cc.flags.mk...done. 00:17:20.653 Type 'make' to build. 00:17:20.653 08:03:26 -- spdk/autopackage.sh@43 -- $ make -j10 00:17:20.911 make[1]: Nothing to be done for 'all'. 
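(The release-rebuild step just logged — autopackage.sh lines 40-43 — strips --enable-debug from the test-run configure flags and reconfigures with LTO before the make that follows. A minimal stand-alone sketch of that flow, assuming the repo lives at /home/vagrant/spdk_repo and using the flag string echoed by get_config_params above:)

  #!/bin/bash
  # Sketch only - not the verbatim autopackage.sh. Mirrors the steps logged above.
  set -e
  cd /home/vagrant/spdk_repo/spdk

  # Configure flags from the instrumented test build, as echoed by get_config_params.
  config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --enable-asan --enable-coverage --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-daos'

  # Release packaging drops the debug flag (cf. the "sed s/--enable-debug//g" step above)...
  config_params=$(sed 's/--enable-debug//g' <<< "$config_params")

  # ...then reconfigures against the prebuilt DPDK with LTO and rebuilds in parallel.
  ./configure $config_params --enable-lto
  make -j10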
00:17:20.911 CC lib/ut/ut.o 00:17:20.911 CC lib/ut_mock/mock.o 00:17:20.911 CC lib/log/log.o 00:17:20.911 CC lib/log/log_flags.o 00:17:20.912 CC lib/log/log_deprecated.o 00:17:21.170 LIB libspdk_ut_mock.a 00:17:21.170 LIB libspdk_ut.a 00:17:21.170 LIB libspdk_log.a 00:17:21.170 CC lib/dma/dma.o 00:17:21.170 CC lib/ioat/ioat.o 00:17:21.170 CXX lib/trace_parser/trace.o 00:17:21.170 CC lib/util/base64.o 00:17:21.170 CC lib/util/bit_array.o 00:17:21.170 CC lib/util/cpuset.o 00:17:21.170 CC lib/util/crc16.o 00:17:21.170 CC lib/util/crc32.o 00:17:21.170 CC lib/util/crc32c.o 00:17:21.170 CC lib/vfio_user/host/vfio_user_pci.o 00:17:21.428 LIB libspdk_ioat.a 00:17:21.428 CC lib/util/crc32_ieee.o 00:17:21.428 CC lib/vfio_user/host/vfio_user.o 00:17:21.428 CC lib/util/crc64.o 00:17:21.428 CC lib/util/dif.o 00:17:21.428 LIB libspdk_dma.a 00:17:21.428 CC lib/util/fd.o 00:17:21.428 CC lib/util/file.o 00:17:21.428 CC lib/util/hexlify.o 00:17:21.428 CC lib/util/iov.o 00:17:21.428 LIB libspdk_vfio_user.a 00:17:21.428 CC lib/util/math.o 00:17:21.686 CC lib/util/pipe.o 00:17:21.687 CC lib/util/strerror_tls.o 00:17:21.687 CC lib/util/string.o 00:17:21.687 CC lib/util/uuid.o 00:17:21.687 CC lib/util/fd_group.o 00:17:21.687 CC lib/util/xor.o 00:17:21.687 CC lib/util/zipf.o 00:17:21.687 LIB libspdk_trace_parser.a 00:17:21.687 LIB libspdk_util.a 00:17:21.945 CC lib/json/json_parse.o 00:17:21.945 CC lib/rdma/common.o 00:17:21.945 CC lib/env_dpdk/env.o 00:17:21.945 CC lib/vmd/vmd.o 00:17:21.945 CC lib/idxd/idxd.o 00:17:21.945 CC lib/json/json_util.o 00:17:21.945 CC lib/rdma/rdma_verbs.o 00:17:21.945 CC lib/env_dpdk/memory.o 00:17:21.945 CC lib/conf/conf.o 00:17:21.945 CC lib/vmd/led.o 00:17:21.945 CC lib/env_dpdk/pci.o 00:17:21.945 CC lib/json/json_write.o 00:17:21.945 LIB libspdk_conf.a 00:17:21.945 CC lib/env_dpdk/init.o 00:17:21.945 CC lib/env_dpdk/threads.o 00:17:21.945 CC lib/idxd/idxd_user.o 00:17:21.945 LIB libspdk_rdma.a 00:17:22.205 CC lib/env_dpdk/pci_ioat.o 00:17:22.205 LIB libspdk_vmd.a 00:17:22.205 CC lib/env_dpdk/pci_virtio.o 00:17:22.205 CC lib/env_dpdk/pci_vmd.o 00:17:22.205 CC lib/env_dpdk/pci_idxd.o 00:17:22.205 CC lib/env_dpdk/pci_event.o 00:17:22.205 LIB libspdk_idxd.a 00:17:22.205 CC lib/env_dpdk/sigbus_handler.o 00:17:22.205 LIB libspdk_json.a 00:17:22.205 CC lib/env_dpdk/pci_dpdk.o 00:17:22.205 CC lib/env_dpdk/pci_dpdk_2207.o 00:17:22.205 CC lib/env_dpdk/pci_dpdk_2211.o 00:17:22.205 CC lib/jsonrpc/jsonrpc_server.o 00:17:22.205 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:17:22.205 CC lib/jsonrpc/jsonrpc_client.o 00:17:22.205 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:17:22.465 LIB libspdk_jsonrpc.a 00:17:22.465 LIB libspdk_env_dpdk.a 00:17:22.465 CC lib/rpc/rpc.o 00:17:22.724 LIB libspdk_rpc.a 00:17:22.724 CC lib/trace/trace.o 00:17:22.724 CC lib/trace/trace_flags.o 00:17:22.724 CC lib/notify/notify.o 00:17:22.724 CC lib/sock/sock.o 00:17:22.724 CC lib/notify/notify_rpc.o 00:17:22.724 CC lib/sock/sock_rpc.o 00:17:22.724 CC lib/trace/trace_rpc.o 00:17:22.984 LIB libspdk_notify.a 00:17:22.984 LIB libspdk_trace.a 00:17:22.984 LIB libspdk_sock.a 00:17:23.247 CC lib/thread/thread.o 00:17:23.247 CC lib/thread/iobuf.o 00:17:23.247 CC lib/nvme/nvme_ctrlr_cmd.o 00:17:23.247 CC lib/nvme/nvme_ctrlr.o 00:17:23.247 CC lib/nvme/nvme_fabric.o 00:17:23.247 CC lib/nvme/nvme_ns_cmd.o 00:17:23.247 CC lib/nvme/nvme_ns.o 00:17:23.247 CC lib/nvme/nvme_pcie_common.o 00:17:23.247 CC lib/nvme/nvme_pcie.o 00:17:23.247 CC lib/nvme/nvme_qpair.o 00:17:23.247 CC lib/nvme/nvme.o 00:17:23.524 LIB libspdk_thread.a 00:17:23.524 CC 
lib/accel/accel.o 00:17:23.524 CC lib/nvme/nvme_quirks.o 00:17:23.524 CC lib/nvme/nvme_transport.o 00:17:23.524 CC lib/nvme/nvme_discovery.o 00:17:23.524 CC lib/accel/accel_rpc.o 00:17:23.524 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:17:23.815 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:17:23.815 CC lib/nvme/nvme_tcp.o 00:17:23.815 CC lib/accel/accel_sw.o 00:17:23.815 CC lib/nvme/nvme_opal.o 00:17:23.815 LIB libspdk_accel.a 00:17:23.815 CC lib/nvme/nvme_io_msg.o 00:17:23.815 CC lib/nvme/nvme_poll_group.o 00:17:23.815 CC lib/nvme/nvme_zns.o 00:17:24.074 CC lib/blob/blobstore.o 00:17:24.074 CC lib/nvme/nvme_cuse.o 00:17:24.074 CC lib/init/json_config.o 00:17:24.074 CC lib/nvme/nvme_vfio_user.o 00:17:24.074 CC lib/nvme/nvme_rdma.o 00:17:24.074 CC lib/blob/request.o 00:17:24.074 CC lib/init/subsystem.o 00:17:24.074 CC lib/blob/zeroes.o 00:17:24.333 CC lib/init/subsystem_rpc.o 00:17:24.333 CC lib/init/rpc.o 00:17:24.333 CC lib/blob/blob_bs_dev.o 00:17:24.333 CC lib/virtio/virtio.o 00:17:24.333 CC lib/virtio/virtio_vhost_user.o 00:17:24.333 CC lib/virtio/virtio_vfio_user.o 00:17:24.333 CC lib/virtio/virtio_pci.o 00:17:24.333 LIB libspdk_init.a 00:17:24.333 CC lib/event/app.o 00:17:24.333 CC lib/event/reactor.o 00:17:24.591 CC lib/bdev/bdev.o 00:17:24.591 CC lib/event/log_rpc.o 00:17:24.591 CC lib/bdev/bdev_rpc.o 00:17:24.591 CC lib/event/app_rpc.o 00:17:24.591 CC lib/bdev/bdev_zone.o 00:17:24.591 LIB libspdk_virtio.a 00:17:24.591 CC lib/event/scheduler_static.o 00:17:24.591 CC lib/bdev/part.o 00:17:24.591 LIB libspdk_blob.a 00:17:24.591 CC lib/bdev/scsi_nvme.o 00:17:24.591 LIB libspdk_nvme.a 00:17:24.591 LIB libspdk_event.a 00:17:24.591 CC lib/lvol/lvol.o 00:17:24.591 CC lib/blobfs/blobfs.o 00:17:24.591 CC lib/blobfs/tree.o 00:17:24.850 LIB libspdk_blobfs.a 00:17:24.850 LIB libspdk_lvol.a 00:17:25.108 LIB libspdk_bdev.a 00:17:25.449 CC lib/scsi/dev.o 00:17:25.449 CC lib/nvmf/ctrlr.o 00:17:25.449 CC lib/nbd/nbd.o 00:17:25.449 CC lib/scsi/lun.o 00:17:25.449 CC lib/nvmf/ctrlr_discovery.o 00:17:25.449 CC lib/ftl/ftl_core.o 00:17:25.449 CC lib/nbd/nbd_rpc.o 00:17:25.449 CC lib/scsi/port.o 00:17:25.449 CC lib/nvmf/ctrlr_bdev.o 00:17:25.449 CC lib/ftl/ftl_init.o 00:17:25.449 CC lib/ftl/ftl_layout.o 00:17:25.449 CC lib/scsi/scsi.o 00:17:25.449 CC lib/scsi/scsi_bdev.o 00:17:25.449 LIB libspdk_nbd.a 00:17:25.449 CC lib/ftl/ftl_debug.o 00:17:25.449 CC lib/scsi/scsi_pr.o 00:17:25.449 CC lib/ftl/ftl_io.o 00:17:25.449 CC lib/nvmf/subsystem.o 00:17:25.449 CC lib/nvmf/nvmf.o 00:17:25.449 CC lib/ftl/ftl_sb.o 00:17:25.449 CC lib/nvmf/nvmf_rpc.o 00:17:25.707 CC lib/ftl/ftl_l2p.o 00:17:25.707 CC lib/ftl/ftl_l2p_flat.o 00:17:25.707 CC lib/ftl/ftl_nv_cache.o 00:17:25.707 CC lib/scsi/scsi_rpc.o 00:17:25.707 CC lib/nvmf/transport.o 00:17:25.707 CC lib/scsi/task.o 00:17:25.707 CC lib/nvmf/tcp.o 00:17:25.707 CC lib/ftl/ftl_band.o 00:17:25.707 CC lib/nvmf/rdma.o 00:17:25.707 CC lib/ftl/ftl_band_ops.o 00:17:25.707 CC lib/ftl/ftl_writer.o 00:17:25.707 LIB libspdk_scsi.a 00:17:25.707 CC lib/ftl/ftl_rq.o 00:17:25.707 CC lib/ftl/ftl_reloc.o 00:17:25.965 CC lib/ftl/ftl_l2p_cache.o 00:17:25.965 CC lib/ftl/ftl_p2l.o 00:17:25.965 CC lib/ftl/mngt/ftl_mngt.o 00:17:25.965 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:17:25.965 CC lib/vhost/vhost.o 00:17:25.965 CC lib/iscsi/conn.o 00:17:25.965 CC lib/vhost/vhost_rpc.o 00:17:25.965 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:17:25.965 CC lib/vhost/vhost_scsi.o 00:17:25.965 CC lib/ftl/mngt/ftl_mngt_startup.o 00:17:25.965 CC lib/vhost/vhost_blk.o 00:17:25.965 CC lib/vhost/rte_vhost_user.o 00:17:26.224 CC 
lib/ftl/mngt/ftl_mngt_md.o 00:17:26.224 CC lib/iscsi/init_grp.o 00:17:26.224 CC lib/ftl/mngt/ftl_mngt_misc.o 00:17:26.224 LIB libspdk_nvmf.a 00:17:26.224 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:17:26.224 CC lib/iscsi/iscsi.o 00:17:26.224 CC lib/iscsi/md5.o 00:17:26.224 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:17:26.224 CC lib/iscsi/param.o 00:17:26.224 CC lib/ftl/mngt/ftl_mngt_band.o 00:17:26.224 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:17:26.224 CC lib/iscsi/portal_grp.o 00:17:26.483 CC lib/iscsi/tgt_node.o 00:17:26.483 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:17:26.483 CC lib/iscsi/iscsi_subsystem.o 00:17:26.483 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:17:26.483 CC lib/iscsi/iscsi_rpc.o 00:17:26.483 CC lib/iscsi/task.o 00:17:26.483 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:17:26.483 CC lib/ftl/utils/ftl_conf.o 00:17:26.483 CC lib/ftl/utils/ftl_md.o 00:17:26.483 LIB libspdk_vhost.a 00:17:26.483 CC lib/ftl/utils/ftl_mempool.o 00:17:26.483 CC lib/ftl/utils/ftl_bitmap.o 00:17:26.741 CC lib/ftl/utils/ftl_property.o 00:17:26.741 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:17:26.741 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:17:26.741 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:17:26.741 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:17:26.741 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:17:26.741 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:17:26.741 LIB libspdk_iscsi.a 00:17:26.741 CC lib/ftl/upgrade/ftl_sb_v3.o 00:17:26.741 CC lib/ftl/upgrade/ftl_sb_v5.o 00:17:26.741 CC lib/ftl/nvc/ftl_nvc_dev.o 00:17:26.741 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:17:26.741 CC lib/ftl/base/ftl_base_dev.o 00:17:26.741 CC lib/ftl/base/ftl_base_bdev.o 00:17:26.998 LIB libspdk_ftl.a 00:17:27.256 CC module/env_dpdk/env_dpdk_rpc.o 00:17:27.256 CC module/scheduler/dynamic/scheduler_dynamic.o 00:17:27.256 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:17:27.256 CC module/scheduler/gscheduler/gscheduler.o 00:17:27.256 CC module/accel/dsa/accel_dsa.o 00:17:27.256 CC module/blob/bdev/blob_bdev.o 00:17:27.256 CC module/accel/iaa/accel_iaa.o 00:17:27.256 CC module/accel/ioat/accel_ioat.o 00:17:27.256 CC module/sock/posix/posix.o 00:17:27.256 CC module/accel/error/accel_error.o 00:17:27.256 LIB libspdk_env_dpdk_rpc.a 00:17:27.256 LIB libspdk_scheduler_gscheduler.a 00:17:27.256 LIB libspdk_scheduler_dpdk_governor.a 00:17:27.256 LIB libspdk_scheduler_dynamic.a 00:17:27.256 CC module/accel/error/accel_error_rpc.o 00:17:27.256 CC module/accel/ioat/accel_ioat_rpc.o 00:17:27.256 CC module/accel/dsa/accel_dsa_rpc.o 00:17:27.256 CC module/accel/iaa/accel_iaa_rpc.o 00:17:27.256 LIB libspdk_blob_bdev.a 00:17:27.256 LIB libspdk_accel_ioat.a 00:17:27.514 LIB libspdk_accel_iaa.a 00:17:27.514 LIB libspdk_accel_dsa.a 00:17:27.514 LIB libspdk_accel_error.a 00:17:27.514 CC module/bdev/error/vbdev_error.o 00:17:27.514 CC module/blobfs/bdev/blobfs_bdev.o 00:17:27.514 CC module/bdev/lvol/vbdev_lvol.o 00:17:27.514 CC module/bdev/delay/vbdev_delay.o 00:17:27.514 CC module/bdev/gpt/gpt.o 00:17:27.514 LIB libspdk_sock_posix.a 00:17:27.514 CC module/bdev/malloc/bdev_malloc.o 00:17:27.514 CC module/bdev/null/bdev_null.o 00:17:27.514 CC module/bdev/passthru/vbdev_passthru.o 00:17:27.514 CC module/bdev/null/bdev_null_rpc.o 00:17:27.514 CC module/bdev/nvme/bdev_nvme.o 00:17:27.514 CC module/bdev/gpt/vbdev_gpt.o 00:17:27.514 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:17:27.514 CC module/bdev/error/vbdev_error_rpc.o 00:17:27.514 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:17:27.514 LIB libspdk_bdev_null.a 00:17:27.514 CC module/bdev/delay/vbdev_delay_rpc.o 00:17:27.772 CC 
module/bdev/malloc/bdev_malloc_rpc.o 00:17:27.772 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:17:27.772 LIB libspdk_blobfs_bdev.a 00:17:27.772 LIB libspdk_bdev_error.a 00:17:27.772 CC module/bdev/raid/bdev_raid.o 00:17:27.772 LIB libspdk_bdev_gpt.a 00:17:27.772 LIB libspdk_bdev_passthru.a 00:17:27.772 LIB libspdk_bdev_delay.a 00:17:27.772 CC module/bdev/split/vbdev_split.o 00:17:27.772 LIB libspdk_bdev_malloc.a 00:17:27.772 CC module/bdev/split/vbdev_split_rpc.o 00:17:27.772 CC module/bdev/zone_block/vbdev_zone_block.o 00:17:27.772 CC module/bdev/aio/bdev_aio.o 00:17:27.772 CC module/bdev/virtio/bdev_virtio_scsi.o 00:17:27.772 CC module/bdev/ftl/bdev_ftl.o 00:17:27.772 LIB libspdk_bdev_lvol.a 00:17:27.772 CC module/bdev/daos/bdev_daos.o 00:17:28.031 CC module/bdev/daos/bdev_daos_rpc.o 00:17:28.031 CC module/bdev/ftl/bdev_ftl_rpc.o 00:17:28.031 LIB libspdk_bdev_split.a 00:17:28.031 CC module/bdev/virtio/bdev_virtio_blk.o 00:17:28.031 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:17:28.031 CC module/bdev/aio/bdev_aio_rpc.o 00:17:28.031 CC module/bdev/raid/bdev_raid_rpc.o 00:17:28.031 CC module/bdev/virtio/bdev_virtio_rpc.o 00:17:28.031 CC module/bdev/nvme/bdev_nvme_rpc.o 00:17:28.031 LIB libspdk_bdev_daos.a 00:17:28.031 LIB libspdk_bdev_ftl.a 00:17:28.031 CC module/bdev/nvme/nvme_rpc.o 00:17:28.031 CC module/bdev/raid/bdev_raid_sb.o 00:17:28.031 CC module/bdev/nvme/bdev_mdns_client.o 00:17:28.031 LIB libspdk_bdev_zone_block.a 00:17:28.031 LIB libspdk_bdev_aio.a 00:17:28.031 CC module/bdev/raid/raid0.o 00:17:28.031 CC module/bdev/nvme/vbdev_opal.o 00:17:28.031 CC module/bdev/raid/raid1.o 00:17:28.289 LIB libspdk_bdev_virtio.a 00:17:28.289 CC module/bdev/raid/concat.o 00:17:28.289 CC module/bdev/nvme/vbdev_opal_rpc.o 00:17:28.289 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:17:28.289 LIB libspdk_bdev_raid.a 00:17:28.289 LIB libspdk_bdev_nvme.a 00:17:28.547 CC module/event/subsystems/sock/sock.o 00:17:28.547 CC module/event/subsystems/scheduler/scheduler.o 00:17:28.547 CC module/event/subsystems/vmd/vmd.o 00:17:28.547 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:17:28.547 CC module/event/subsystems/iobuf/iobuf.o 00:17:28.547 CC module/event/subsystems/vmd/vmd_rpc.o 00:17:28.547 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:17:28.547 LIB libspdk_event_sock.a 00:17:28.547 LIB libspdk_event_vhost_blk.a 00:17:28.547 LIB libspdk_event_vmd.a 00:17:28.547 LIB libspdk_event_scheduler.a 00:17:28.547 LIB libspdk_event_iobuf.a 00:17:28.805 CC module/event/subsystems/accel/accel.o 00:17:28.805 LIB libspdk_event_accel.a 00:17:29.063 CC module/event/subsystems/bdev/bdev.o 00:17:29.063 LIB libspdk_event_bdev.a 00:17:29.322 CC module/event/subsystems/nbd/nbd.o 00:17:29.322 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:17:29.322 CC module/event/subsystems/scsi/scsi.o 00:17:29.322 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:17:29.322 LIB libspdk_event_nbd.a 00:17:29.322 LIB libspdk_event_scsi.a 00:17:29.580 LIB libspdk_event_nvmf.a 00:17:29.580 CC module/event/subsystems/iscsi/iscsi.o 00:17:29.580 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:17:29.580 LIB libspdk_event_vhost_scsi.a 00:17:29.839 LIB libspdk_event_iscsi.a 00:17:29.839 CXX app/trace/trace.o 00:17:29.839 CC examples/sock/hello_world/hello_sock.o 00:17:29.839 CC examples/nvme/hello_world/hello_world.o 00:17:29.839 CC examples/accel/perf/accel_perf.o 00:17:29.839 CC examples/ioat/perf/perf.o 00:17:29.839 CC examples/vmd/lsvmd/lsvmd.o 00:17:29.839 CC examples/blob/hello_world/hello_blob.o 00:17:29.839 CC 
examples/bdev/hello_world/hello_bdev.o 00:17:30.097 CC test/accel/dif/dif.o 00:17:30.097 CC examples/nvmf/nvmf/nvmf.o 00:17:30.097 LINK lsvmd 00:17:30.097 LINK hello_sock 00:17:30.097 LINK ioat_perf 00:17:30.097 LINK accel_perf 00:17:30.097 LINK hello_world 00:17:30.097 LINK spdk_trace 00:17:30.097 LINK hello_blob 00:17:30.097 LINK hello_bdev 00:17:30.356 LINK dif 00:17:30.356 LINK nvmf 00:17:38.478 CC app/trace_record/trace_record.o 00:17:38.478 CC examples/ioat/verify/verify.o 00:17:39.045 LINK spdk_trace_record 00:17:39.304 LINK verify 00:17:40.240 CC app/nvmf_tgt/nvmf_main.o 00:17:41.179 LINK nvmf_tgt 00:17:42.117 CC test/app/bdev_svc/bdev_svc.o 00:17:42.685 CC examples/vmd/led/led.o 00:17:42.944 LINK bdev_svc 00:17:43.512 LINK led 00:17:44.449 CC examples/nvme/reconnect/reconnect.o 00:17:45.016 CC examples/nvme/nvme_manage/nvme_manage.o 00:17:45.583 LINK reconnect 00:17:46.151 LINK nvme_manage 00:17:52.709 CC examples/bdev/bdevperf/bdevperf.o 00:17:54.609 LINK bdevperf 00:18:06.841 CC examples/blob/cli/blobcli.o 00:18:07.775 LINK blobcli 00:18:09.673 CC examples/util/zipf/zipf.o 00:18:10.606 LINK zipf 00:18:14.794 CC examples/nvme/arbitration/arbitration.o 00:18:16.170 LINK arbitration 00:18:17.105 CC app/iscsi_tgt/iscsi_tgt.o 00:18:18.039 LINK iscsi_tgt 00:18:26.162 CC app/spdk_tgt/spdk_tgt.o 00:18:26.420 LINK spdk_tgt 00:18:26.987 CC examples/thread/thread/thread_ex.o 00:18:27.245 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:18:28.178 LINK thread 00:18:28.744 LINK nvme_fuzz 00:18:35.308 CC examples/idxd/perf/perf.o 00:18:35.565 LINK idxd_perf 00:18:37.467 CC examples/nvme/hotplug/hotplug.o 00:18:38.841 LINK hotplug 00:18:47.010 CC test/app/histogram_perf/histogram_perf.o 00:18:47.942 LINK histogram_perf 00:18:51.225 CC examples/interrupt_tgt/interrupt_tgt.o 00:18:51.794 LINK interrupt_tgt 00:18:52.731 CC app/spdk_lspci/spdk_lspci.o 00:18:53.668 LINK spdk_lspci 00:18:53.926 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:18:56.459 LINK iscsi_fuzz 00:18:56.459 CC examples/nvme/cmb_copy/cmb_copy.o 00:18:57.394 LINK cmb_copy 00:18:59.968 CC examples/nvme/abort/abort.o 00:19:00.535 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:19:01.471 LINK abort 00:19:02.040 CC app/spdk_nvme_perf/perf.o 00:19:02.040 LINK pmr_persistence 00:19:04.576 LINK spdk_nvme_perf 00:19:14.547 CC app/spdk_nvme_identify/identify.o 00:19:15.924 LINK spdk_nvme_identify 00:19:15.924 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:19:16.491 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:19:17.449 LINK vhost_fuzz 00:19:20.732 CC test/app/stub/stub.o 00:19:20.733 CC test/app/jsoncat/jsoncat.o 00:19:20.990 CC test/bdev/bdevio/bdevio.o 00:19:21.248 LINK jsoncat 00:19:21.248 LINK stub 00:19:21.814 LINK bdevio 00:19:22.378 CC test/blobfs/mkfs/mkfs.o 00:19:22.378 TEST_HEADER include/spdk/config.h 00:19:22.378 CXX test/cpp_headers/rpc.o 00:19:22.944 LINK mkfs 00:19:22.944 CXX test/cpp_headers/vfio_user_spec.o 00:19:22.944 CC app/spdk_nvme_discover/discovery_aer.o 00:19:23.202 CXX test/cpp_headers/accel_module.o 00:19:23.461 CXX test/cpp_headers/bit_pool.o 00:19:23.720 LINK spdk_nvme_discover 00:19:23.720 CXX test/cpp_headers/ioat.o 00:19:24.288 CXX test/cpp_headers/blobfs.o 00:19:24.288 CXX test/cpp_headers/pipe.o 00:19:24.546 CXX test/cpp_headers/accel.o 00:19:24.804 CXX test/cpp_headers/version.o 00:19:25.063 CXX test/cpp_headers/trace_parser.o 00:19:25.321 CXX test/cpp_headers/opal_spec.o 00:19:25.580 CC app/spdk_top/spdk_top.o 00:19:25.839 CXX test/cpp_headers/uuid.o 00:19:26.407 CXX test/cpp_headers/bdev.o 00:19:26.666 LINK 
spdk_top 00:19:26.924 CXX test/cpp_headers/hexlify.o 00:19:27.193 CXX test/cpp_headers/likely.o 00:19:27.455 CXX test/cpp_headers/vhost.o 00:19:27.713 CXX test/cpp_headers/memory.o 00:19:28.282 CXX test/cpp_headers/vfio_user_pci.o 00:19:28.282 CXX test/cpp_headers/dma.o 00:19:28.282 CC test/dma/test_dma/test_dma.o 00:19:28.540 CXX test/cpp_headers/nbd.o 00:19:28.799 CXX test/cpp_headers/env.o 00:19:28.799 CXX test/cpp_headers/nvme_zns.o 00:19:29.058 CC app/vhost/vhost.o 00:19:29.342 LINK test_dma 00:19:29.342 CXX test/cpp_headers/env_dpdk.o 00:19:29.654 LINK vhost 00:19:29.654 CXX test/cpp_headers/init.o 00:19:30.254 CC app/spdk_dd/spdk_dd.o 00:19:30.254 CXX test/cpp_headers/fd_group.o 00:19:30.824 LINK spdk_dd 00:19:30.824 CXX test/cpp_headers/bdev_module.o 00:19:31.392 CXX test/cpp_headers/opal.o 00:19:31.651 CXX test/cpp_headers/event.o 00:19:32.219 CXX test/cpp_headers/base64.o 00:19:32.786 CXX test/cpp_headers/nvmf.o 00:19:33.354 CXX test/cpp_headers/nvmf_spec.o 00:19:33.354 CC app/fio/nvme/fio_plugin.o 00:19:33.921 CXX test/cpp_headers/blobfs_bdev.o 00:19:34.179 LINK spdk_nvme 00:19:34.179 CXX test/cpp_headers/fd.o 00:19:34.745 CXX test/cpp_headers/barrier.o 00:19:35.313 CXX test/cpp_headers/nvmf_fc_spec.o 00:19:35.880 CC test/event/event_perf/event_perf.o 00:19:36.139 CXX test/cpp_headers/zipf.o 00:19:36.139 CC test/event/reactor/reactor.o 00:19:36.398 CC test/env/mem_callbacks/mem_callbacks.o 00:19:36.657 LINK event_perf 00:19:36.917 CXX test/cpp_headers/scheduler.o 00:19:37.191 LINK reactor 00:19:37.451 LINK mem_callbacks 00:19:38.018 CXX test/cpp_headers/dif.o 00:19:38.956 CXX test/cpp_headers/scsi_spec.o 00:19:40.334 CXX test/cpp_headers/blob.o 00:19:41.270 CXX test/cpp_headers/cpuset.o 00:19:42.645 CXX test/cpp_headers/thread.o 00:19:43.581 CC test/env/vtophys/vtophys.o 00:19:43.581 CXX test/cpp_headers/tree.o 00:19:44.148 CXX test/cpp_headers/xor.o 00:19:44.715 LINK vtophys 00:19:45.282 CXX test/cpp_headers/assert.o 00:19:46.657 CXX test/cpp_headers/file.o 00:19:47.595 CXX test/cpp_headers/endian.o 00:19:48.968 CXX test/cpp_headers/notify.o 00:19:49.900 CXX test/cpp_headers/util.o 00:19:51.277 CXX test/cpp_headers/log.o 00:19:52.213 CXX test/cpp_headers/sock.o 00:19:53.148 CXX test/cpp_headers/nvme_ocssd_spec.o 00:19:53.408 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:19:53.976 CC test/event/reactor_perf/reactor_perf.o 00:19:53.976 CXX test/cpp_headers/config.o 00:19:54.236 LINK env_dpdk_post_init 00:19:54.495 CXX test/cpp_headers/histogram_data.o 00:19:54.755 LINK reactor_perf 00:19:55.693 CXX test/cpp_headers/nvme_intel.o 00:19:56.292 CXX test/cpp_headers/idxd_spec.o 00:19:57.232 CXX test/cpp_headers/crc16.o 00:19:58.607 CXX test/cpp_headers/bdev_zone.o 00:19:59.173 CXX test/cpp_headers/stdinc.o 00:19:59.431 CC test/event/app_repeat/app_repeat.o 00:20:00.366 CXX test/cpp_headers/vmd.o 00:20:00.366 LINK app_repeat 00:20:01.305 CXX test/cpp_headers/scsi.o 00:20:02.242 CXX test/cpp_headers/jsonrpc.o 00:20:03.179 CC app/fio/bdev/fio_plugin.o 00:20:03.179 CXX test/cpp_headers/blob_bdev.o 00:20:04.114 CXX test/cpp_headers/crc32.o 00:20:04.373 LINK spdk_bdev 00:20:04.939 CXX test/cpp_headers/nvmf_transport.o 00:20:05.871 CXX test/cpp_headers/idxd.o 00:20:06.805 CXX test/cpp_headers/crc64.o 00:20:07.739 CXX test/cpp_headers/nvme.o 00:20:07.739 CXX test/cpp_headers/iscsi_spec.o 00:20:08.673 CXX test/cpp_headers/queue.o 00:20:08.673 CXX test/cpp_headers/nvmf_cmd.o 00:20:08.930 CXX test/cpp_headers/lvol.o 00:20:09.495 CXX test/cpp_headers/ftl.o 00:20:10.062 CXX 
test/cpp_headers/trace.o 00:20:10.329 CC test/lvol/esnap/esnap.o 00:20:10.602 CC test/event/scheduler/scheduler.o 00:20:10.602 CXX test/cpp_headers/ioat_spec.o 00:20:11.171 LINK scheduler 00:20:11.431 CXX test/cpp_headers/conf.o 00:20:11.691 CC test/nvme/aer/aer.o 00:20:11.951 CC test/env/memory/memory_ut.o 00:20:12.210 CXX test/cpp_headers/ublk.o 00:20:12.470 LINK aer 00:20:12.730 CXX test/cpp_headers/bit_array.o 00:20:12.990 LINK memory_ut 00:20:13.249 CXX test/cpp_headers/pci_ids.o 00:20:13.818 CXX test/cpp_headers/nvme_spec.o 00:20:14.077 CXX test/cpp_headers/string.o 00:20:14.659 CXX test/cpp_headers/gpt_spec.o 00:20:14.920 LINK esnap 00:20:15.178 CXX test/cpp_headers/nvme_ocssd.o 00:20:15.743 CXX test/cpp_headers/json.o 00:20:16.002 CC test/env/pci/pci_ut.o 00:20:16.261 CXX test/cpp_headers/reduce.o 00:20:16.828 LINK pci_ut 00:20:16.828 CXX test/cpp_headers/mmio.o 00:20:17.396 CC test/nvme/reset/reset.o 00:20:18.331 CC test/rpc_client/rpc_client_test.o 00:20:18.590 LINK reset 00:20:19.159 LINK rpc_client_test 00:20:19.159 CC test/nvme/sgl/sgl.o 00:20:19.159 CC test/thread/poller_perf/poller_perf.o 00:20:20.097 LINK poller_perf 00:20:20.097 LINK sgl 00:20:21.035 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:20:21.603 LINK histogram_ut 00:20:22.172 CC test/nvme/e2edp/nvme_dp.o 00:20:23.169 LINK nvme_dp 00:20:25.074 CC test/thread/lock/spdk_lock.o 00:20:26.449 CC test/unit/lib/accel/accel.c/accel_ut.o 00:20:27.016 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:20:27.016 LINK spdk_lock 00:20:29.548 LINK accel_ut 00:20:32.084 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:20:32.342 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:20:32.342 LINK blob_bdev_ut 00:20:32.909 LINK bdev_ut 00:20:33.168 LINK tree_ut 00:20:36.458 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:20:36.458 CC test/nvme/overhead/overhead.o 00:20:37.024 LINK overhead 00:20:37.024 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:20:37.957 LINK blobfs_async_ut 00:20:37.957 CC test/unit/lib/bdev/part.c/part_ut.o 00:20:38.524 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:20:38.783 LINK blobfs_sync_ut 00:20:38.783 CC test/unit/lib/blob/blob.c/blob_ut.o 00:20:39.042 LINK scsi_nvme_ut 00:20:40.945 LINK part_ut 00:20:40.945 CC test/unit/lib/dma/dma.c/dma_ut.o 00:20:41.914 LINK dma_ut 00:20:43.308 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:20:43.308 LINK gpt_ut 00:20:43.567 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:20:44.505 LINK blob_ut 00:20:44.505 CC test/unit/lib/event/app.c/app_ut.o 00:20:44.505 LINK vbdev_lvol_ut 00:20:44.505 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:20:44.765 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:20:44.765 LINK blobfs_bdev_ut 00:20:44.765 LINK app_ut 00:20:45.026 LINK ioat_ut 00:20:45.026 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:20:45.285 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:20:45.544 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:20:45.544 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:20:45.804 LINK bdev_raid_sb_ut 00:20:46.063 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:20:46.063 LINK reactor_ut 00:20:46.322 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:20:46.322 LINK bdev_zone_ut 00:20:46.580 CC test/nvme/err_injection/err_injection.o 00:20:46.580 LINK bdev_ut 00:20:46.838 LINK err_injection 00:20:46.838 LINK conn_ut 00:20:46.838 LINK bdev_raid_ut 00:20:46.838 CC test/nvme/startup/startup.o 00:20:47.097 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:20:47.097 LINK 
startup 00:20:47.354 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:20:47.920 LINK vbdev_zone_block_ut 00:20:47.920 LINK concat_ut 00:20:48.486 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:20:51.019 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:20:51.278 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:20:51.537 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:20:51.537 LINK bdev_nvme_ut 00:20:51.796 LINK init_grp_ut 00:20:51.796 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:20:51.796 LINK jsonrpc_server_ut 00:20:52.054 CC test/unit/lib/log/log.c/log_ut.o 00:20:52.313 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:20:52.313 LINK log_ut 00:20:52.313 LINK raid1_ut 00:20:52.571 LINK json_parse_ut 00:20:53.138 CC test/unit/lib/notify/notify.c/notify_ut.o 00:20:53.397 LINK lvol_ut 00:20:53.397 LINK notify_ut 00:20:53.655 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:20:54.589 CC test/nvme/reserve/reserve.o 00:20:54.847 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:20:54.847 CC test/nvme/simple_copy/simple_copy.o 00:20:54.847 LINK reserve 00:20:55.106 LINK iscsi_ut 00:20:55.106 LINK simple_copy 00:20:55.391 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:20:55.391 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:20:55.650 LINK nvme_ut 00:20:55.908 LINK dev_ut 00:20:56.167 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:20:57.101 LINK lun_ut 00:20:57.101 CC test/unit/lib/sock/sock.c/sock_ut.o 00:20:57.666 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:20:57.923 LINK tcp_ut 00:20:58.180 LINK scsi_ut 00:20:58.439 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:20:58.697 LINK sock_ut 00:20:59.264 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:21:00.199 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:21:00.457 LINK ctrlr_ut 00:21:01.393 LINK nvme_ctrlr_ut 00:21:01.393 LINK nvme_ctrlr_cmd_ut 00:21:01.393 CC test/unit/lib/iscsi/param.c/param_ut.o 00:21:01.651 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:21:01.911 CC test/nvme/connect_stress/connect_stress.o 00:21:01.911 LINK param_ut 00:21:02.243 LINK connect_stress 00:21:02.537 CC test/unit/lib/sock/posix.c/posix_ut.o 00:21:02.537 LINK scsi_bdev_ut 00:21:03.106 LINK posix_ut 00:21:03.106 CC test/nvme/boot_partition/boot_partition.o 00:21:03.365 LINK boot_partition 00:21:04.744 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:21:04.744 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:21:04.744 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:21:05.312 LINK portal_grp_ut 00:21:05.572 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:21:05.831 LINK nvme_ctrlr_ocssd_cmd_ut 00:21:05.831 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:21:06.090 LINK scsi_pr_ut 00:21:06.657 LINK subsystem_ut 00:21:06.916 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:21:06.916 LINK ctrlr_discovery_ut 00:21:07.183 CC test/unit/lib/thread/thread.c/thread_ut.o 00:21:07.183 CC test/unit/lib/util/base64.c/base64_ut.o 00:21:07.442 LINK json_util_ut 00:21:07.442 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:21:07.442 LINK base64_ut 00:21:08.378 LINK thread_ut 00:21:08.378 LINK nvme_ns_ut 00:21:08.378 CC test/nvme/compliance/nvme_compliance.o 00:21:08.378 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:21:08.378 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:21:08.636 LINK nvme_compliance 00:21:08.636 LINK pci_event_ut 00:21:08.636 LINK tgt_node_ut 00:21:08.894 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:21:09.153 LINK bit_array_ut 00:21:10.090 CC 
test/unit/lib/json/json_write.c/json_write_ut.o 00:21:10.090 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:21:10.090 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:21:10.090 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:21:10.090 LINK iobuf_ut 00:21:10.349 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:21:10.609 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:21:10.609 LINK ctrlr_bdev_ut 00:21:10.868 LINK cpuset_ut 00:21:10.868 LINK subsystem_ut 00:21:10.868 LINK json_write_ut 00:21:11.128 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:21:11.386 LINK crc16_ut 00:21:11.386 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:21:11.644 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:21:11.644 LINK nvme_ns_cmd_ut 00:21:11.903 LINK rpc_ut 00:21:12.161 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:21:12.161 LINK nvme_ns_ocssd_cmd_ut 00:21:12.419 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:21:12.678 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:21:12.678 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:21:12.678 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:21:12.935 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:21:12.935 LINK crc32_ieee_ut 00:21:12.935 LINK nvme_pcie_ut 00:21:12.935 LINK nvmf_ut 00:21:12.935 LINK idxd_user_ut 00:21:12.935 LINK crc32c_ut 00:21:13.193 CC test/unit/lib/rdma/common.c/common_ut.o 00:21:13.451 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:21:13.451 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:21:13.451 LINK common_ut 00:21:13.451 CC test/nvme/fused_ordering/fused_ordering.o 00:21:13.710 CC test/unit/lib/util/dif.c/dif_ut.o 00:21:13.710 LINK vhost_ut 00:21:13.710 LINK fused_ordering 00:21:13.710 LINK crc64_ut 00:21:13.710 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:21:13.968 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:21:13.968 LINK nvme_poll_group_ut 00:21:13.968 LINK dif_ut 00:21:14.227 LINK idxd_ut 00:21:14.227 CC test/unit/lib/util/iov.c/iov_ut.o 00:21:14.227 LINK iov_ut 00:21:14.486 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:21:14.486 LINK nvme_qpair_ut 00:21:14.744 LINK ftl_l2p_ut 00:21:15.680 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:21:15.938 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:21:15.938 CC test/nvme/doorbell_aers/doorbell_aers.o 00:21:16.196 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:21:16.196 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:21:16.196 LINK nvme_quirks_ut 00:21:16.196 LINK doorbell_aers 00:21:16.196 CC test/unit/lib/util/math.c/math_ut.o 00:21:16.762 LINK pipe_ut 00:21:16.762 LINK math_ut 00:21:16.762 LINK ftl_band_ut 00:21:17.020 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:21:17.279 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:21:17.538 LINK rdma_ut 00:21:18.105 CC test/nvme/fdp/fdp.o 00:21:18.364 CC test/nvme/cuse/cuse.o 00:21:18.364 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:21:18.364 CC test/unit/lib/util/string.c/string_ut.o 00:21:18.623 LINK nvme_transport_ut 00:21:18.623 LINK fdp 00:21:18.623 LINK nvme_tcp_ut 00:21:18.881 LINK string_ut 00:21:19.140 LINK ftl_io_ut 00:21:19.140 LINK cuse 00:21:19.400 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:21:19.659 LINK ftl_bitmap_ut 00:21:20.234 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:21:20.507 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:21:20.766 CC test/unit/lib/util/xor.c/xor_ut.o 00:21:20.766 LINK ftl_mempool_ut 00:21:21.332 LINK xor_ut 00:21:21.332 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:21:21.590 LINK nvme_io_msg_ut 00:21:21.849 CC 
test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:21:22.109 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:21:22.109 LINK ftl_mngt_ut 00:21:22.678 LINK ftl_sb_ut 00:21:22.678 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:21:22.937 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:21:23.197 LINK ftl_layout_upgrade_ut 00:21:23.197 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:21:23.197 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:21:23.197 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:21:23.456 LINK nvme_pcie_common_ut 00:21:23.456 LINK nvme_opal_ut 00:21:23.456 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:21:23.715 LINK transport_ut 00:21:23.715 LINK nvme_fabric_ut 00:21:23.973 LINK nvme_rdma_ut 00:21:24.231 LINK nvme_cuse_ut 00:21:26.130 08:07:31 -- spdk/autopackage.sh@44 -- $ make -j10 clean 00:21:26.389 make[1]: Nothing to be done for 'clean'. 00:21:30.574 08:07:35 -- spdk/autopackage.sh@46 -- $ timing_exit build_release 00:21:30.574 08:07:35 -- common/autotest_common.sh@718 -- $ xtrace_disable 00:21:30.574 08:07:35 -- common/autotest_common.sh@10 -- $ set +x 00:21:30.574 08:07:35 -- spdk/autopackage.sh@48 -- $ timing_finish 00:21:30.574 08:07:35 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:21:30.574 08:07:35 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:21:30.575 08:07:35 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:30.575 + [[ -n 2961 ]] 00:21:30.575 + sudo kill 2961 00:21:30.575 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:21:30.585 [Pipeline] } 00:21:30.610 [Pipeline] // timeout 00:21:30.617 [Pipeline] } 00:21:30.636 [Pipeline] // stage 00:21:30.643 [Pipeline] } 00:21:30.661 [Pipeline] // catchError 00:21:30.671 [Pipeline] stage 00:21:30.676 [Pipeline] { (Stop VM) 00:21:30.692 [Pipeline] sh 00:21:30.974 + vagrant halt 00:21:35.164 ==> default: Halting domain... 00:21:39.362 [Pipeline] sh 00:21:39.639 + vagrant destroy -f 00:21:42.959 ==> default: Removing domain... 00:21:42.973 [Pipeline] sh 00:21:43.253 + mv output /var/jenkins/workspace/centos7-vg-autotest/output 00:21:43.264 [Pipeline] } 00:21:43.287 [Pipeline] // stage 00:21:43.293 [Pipeline] } 00:21:43.313 [Pipeline] // dir 00:21:43.318 [Pipeline] } 00:21:43.335 [Pipeline] // wrap 00:21:43.340 [Pipeline] } 00:21:43.356 [Pipeline] // catchError 00:21:43.365 [Pipeline] stage 00:21:43.367 [Pipeline] { (Epilogue) 00:21:43.382 [Pipeline] sh 00:21:43.663 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:21:58.550 [Pipeline] catchError 00:21:58.552 [Pipeline] { 00:21:58.572 [Pipeline] sh 00:21:58.860 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:21:59.118 Artifacts sizes are good 00:21:59.126 [Pipeline] } 00:21:59.143 [Pipeline] // catchError 00:21:59.155 [Pipeline] archiveArtifacts 00:21:59.162 Archiving artifacts 00:21:59.503 [Pipeline] cleanWs 00:21:59.514 [WS-CLEANUP] Deleting project workspace... 00:21:59.514 [WS-CLEANUP] Deferred wipeout is used... 00:21:59.520 [WS-CLEANUP] done 00:21:59.522 [Pipeline] } 00:21:59.540 [Pipeline] // stage 00:21:59.546 [Pipeline] } 00:21:59.562 [Pipeline] // node 00:21:59.567 [Pipeline] End of Pipeline 00:21:59.611 Finished: SUCCESS
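(The epilogue above runs jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh before archiving, and that script's body never appears in this log. The sketch below is purely hypothetical — the 100 MB per-file limit, the output/ path, and the messages are assumptions, not the real script — and only illustrates the kind of gate that would print "Artifacts sizes are good":)

  #!/bin/bash
  # Hypothetical artifact size gate - NOT the real check_artifacts_size.sh.
  limit_kb=$((100 * 1024))   # assumed 100 MB per-file ceiling
  status=0
  # Walk every archived file and flag anything over the ceiling.
  while IFS= read -r -d '' f; do
      size_kb=$(du -k "$f" | cut -f1)
      if [ "$size_kb" -gt "$limit_kb" ]; then
          echo "Artifact too large: $f (${size_kb} KB)"
          status=1
      fi
  done < <(find output -type f -print0)
  # Match the success message seen in the log when nothing exceeded the limit.
  [ "$status" -eq 0 ] && echo "Artifacts sizes are good"
  exit "$status"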